Tag: Strategy

How Tranzform Security Can Help You Reduce Insurance Fraud

Health care spending accounts for 17.8% of GDP in the United States. In Canada, total health care spending is reported to be 11% of GDP.

The dispensing of health care depends uniquely on a healthy relationship between health care plan administrators and the diagnosing physicians prescribing access to the benefits and services of the plan. In some circumstances, other types of insurance (e.g., property and casualty, and workplace insurance) draw on these same resources when dispensing the medical benefits and services of their plans.

Diagnosing physicians are expected to diagnose illnesses and injuries accurately, and to prescribe only necessary services. These physicians, as well as other regulated professionals, are guided by controls to assure the billing integrity of the system. When plan administrators have deception concerns beyond billing integrity issues, they may make referrals for investigation. Wrongdoing can end in civil and/or criminal proceedings.

Physicians are the gatekeepers for access to the plans. Their influence, and the importance of their cooperation, cannot be overstated. No group is better positioned to offer advice on reducing waste, misuse and abuse across a wide range of health care products and services (e.g., pharmaceuticals, hospitalization, rehabilitation, durable medical equipment, home care, physiotherapy, etc.).

Controlling waste, misuse, and abuse of health care resources is foremost a people challenge. Without trust and cooperation between plan administrators and diagnosing physicians, plans will continue to be exposed to avoidable financial harms. Yet it is still inevitable that some will cheat the system when tempted, or under a range of environmental pressures. The science is in how you treat mostly honest people. Believe us, their contemporaries are watching.

Why Work with Us?

We think of insurance plans as complex systems. We draw on three bodies of expertise: i) Behavioral Insights teams to control diagnosing and other practitioner billing incidents when people are tempted to do bad things; ii) Data Science teams for early detection of high-risk hot spots and patterns; and iii) Situational Crime Prevention Science teams to detect, prevent and reduce predatory criminal fraud.
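To make the data-science idea concrete, here is a minimal sketch of the kind of early-detection screen such a team might run over monthly billing data: flag providers whose billed totals deviate sharply from their peer group. The provider IDs, dollar figures and threshold are invented for illustration; real screening uses far richer features than a single z-score.

```python
from statistics import mean, stdev

def flag_hot_spots(monthly_claims, z_threshold=1.5):
    """Flag providers whose billing totals deviate sharply from the peer group.

    monthly_claims: dict mapping provider id -> total billed for the month.
    Returns {provider: z-score} for providers exceeding the threshold.
    """
    totals = list(monthly_claims.values())
    mu, sigma = mean(totals), stdev(totals)
    return {
        provider: round((billed - mu) / sigma, 2)
        for provider, billed in monthly_claims.items()
        if sigma > 0 and (billed - mu) / sigma > z_threshold
    }

claims = {"P001": 9800, "P002": 10200, "P003": 9900,
          "P004": 10100, "P005": 31000}  # P005 bills roughly 3x its peers
print(flag_hot_spots(claims))
```

A flag like this is only a trigger for review, not proof of wrongdoing; the point is to surface hot spots early enough for the corrective conversations discussed below.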

Applying Behavioral Insights to Temptations

Our behavioral insights team operates with specific beliefs about trusted diagnosing and services partners: 

  • With the exception of a few, most people are moral but, from time to time, do bad things when tempted;
  • Early detection and correction is critical. Once the Rubicon has been crossed from billing integrity to cheating a little bit, it becomes easier to rationalize escalating bad behavior, and never more so than in environments which offer excuses;
  • Diagnosing physicians are the gatekeepers. They are the eyes and ears of the system, offering boundless opportunity to minimize waste, misuse and abuse of the plan (e.g., beneficiary entitlement, medical identity fraud, pharmaceuticals, hospitalization, rehabilitation, durable medical products, home care, etc.);
  • There can be no cooperation in preserving the system without mutual respect and trust between plan administrators and trusted billing providers; and
  • The language used and actions taken against mostly honest people in the trusted billing ecosystem are not the same as for predatory fraud attacks by people without moral conscience.

From the science on “tit for tat” (reciprocal altruism), it is predictable that most physicians are willing to cooperate with plan administrators in reducing losses from waste, misuse and abuse. But they will expect cooperation in return. Building and sustaining trust and cooperation is complex. It is dynamic – it never ends.
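The “tit for tat” dynamic referred to above can be illustrated with the classic iterated prisoner's dilemma strategy: cooperate on the first move, then mirror whatever the other party did last. The payoff values below are the standard textbook ones, used purely for illustration:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds, opponent_moves):
    """Play tit-for-tat against a fixed sequence of opponent moves.

    Standard payoffs: both cooperate -> 3 each, both defect -> 1 each,
    lone defector -> 5, the betrayed cooperator -> 0.
    """
    payoff = {("C", "C"): 3, ("D", "D"): 1, ("D", "C"): 5, ("C", "D"): 0}
    score, history = 0, []
    for i in range(rounds):
        mine, theirs = tit_for_tat(history), opponent_moves[i]
        score += payoff[(mine, theirs)]
        history.append(theirs)
    return score

print(play(4, ["C", "C", "C", "C"]))  # 12: cooperation is reciprocated
print(play(4, ["D", "D", "D", "D"]))  # 3: a first-round loss, then mutual defection
```

The lesson for plan administrators is the same as the model's: reciprocated cooperation outperforms a standing posture of suspicion, but only if defection is answered promptly.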

A Problem-Solving Approach to Fraud Controls for Countering Predators

Predatory fraud attacks from outside the system, and by the few morally bankrupt inside the system, are a problem of a different nature. These are people without moral conscience.

We apply the problem-solving skills and lessons learned from situational crime prevention to identify and reduce fraud attacks.

We teach the insurance sector how to develop stakeholder partnerships, and how to engage teams of expertise in identifying and attacking the root causes of potential fraud hot spots, trends and patterns.

We introduce our clients to a Situational Health Care Fraud Prevention Matrix, designed from years of research and experience with health care fraud controls. Using this model, enforcement is applied as one of multiple intervention tools for reducing fraud problems.

What’s the difference between Business Continuity (BC) and Disaster Recovery (DR)?

What’s the difference between Business Continuity (BC) and Disaster Recovery (DR)? This is a question I have had to answer multiple times. It is a very good question and the answer is not simple! So, as a good lazy ‘techy’, I tried to find the answer on the web. That way, when I am asked, all I would have to do is send a link.

I have used this approach multiple times for other questions I have received. It is convenient and a great way to avoid re-typing an answer. However, this time, I was not very successful in my quest to find an answer. I searched the web, multiple times, for hours without finding the perfect “pre-written answer” I was looking for. So I decided to stop being lazy and write it myself.

Now, if you are like me, and you’ve been looking for an answer to this question, feel free to use this one.

So, let’s start with a few definitions from the Business Continuity Institute (BCI) Glossary:

Disaster Recovery (DR): “The strategies and plans for recovering and restoring the organization’s technological infrastructure and capabilities after a serious interruption.” Editor’s Note: “DR is now normally only used in reference to an organization’s IT and telecommunications recovery.”

Business Continuity (BC): “The strategic and tactical capability of the organization to plan for and respond to incidents and business disruptions in order to continue business operations at an acceptable predefined level.”

First, I’d like to say that I have a slightly different view of DR than BCI. Now, who am I to disagree with what BCI is saying? Well, bear with me a little longer and you will see how my interpretation of DR might help people understand the differences between DR and BC better. So here’s my definition: DR is the strategies and plans for recovering and restoring the organization’s (scratch “technological”) infrastructures and capabilities after an interruption (regardless of the severity).

Unlike the BCI, I don’t make a distinction between the technological infrastructure and the rest of the infrastructures (the buildings, for example), nor do I differentiate between the types of interruptions. In my opinion, whether a system is down or a building is burnt or flooded, both should be considered a disaster and therefore both require a disaster recovery plan.

Therefore DR is the action of fixing a failing, degraded or completely damaged infrastructure. For example, say the 2nd floor of a building was on fire; the fire is now out, so the initial crisis is over. Now the damage caused by the fire must be dealt with: there is water and smoke damage on the 2nd floor, the 3rd floor has smoke damage and the 1st floor has water damage. The cleanup, replacement of furniture, repair of the building and its structure, painting, plastering, etc. are all part of the disaster recovery plan.

What is Business Continuity then? Business Continuity is how you continue to maintain critical business functions during that crisis. Back to the example: when the fire started, the alarm went off and people were evacuated from the building. Let’s say you had a Call Center on the 2nd floor, and this just happens to be a critical area of your business. How would you continue to answer calls while people are being evacuated? How would you answer calls while the building is being inspected, repaired or rebuilt? Keeping the business running during this time is what I call Business Continuity.

The same approach can be taken with a system crash or when the performance of a system has degraded to the point that it has impacted business operations. So fixing the system is DR and the action of keeping the business operations running without the system being available is BC.

In conclusion, BC is all about being proactive and sustaining critical business functions whatever it takes, whereas DR is the process of dealing with the aftermath and ensuring the infrastructure (system, building, etc.) is restored to its pre-interruption state.

Cloud Computing Defined

Welcome to the first installment of what will be an on-going series on Cloud Computing.  Everyone in the industry is talking about it and the media is awash with hype.  I’ll be taking a different approach by trying to bring some clarity and reason to help you make informed decisions in this area.

The place to start is with a definition of the term.  There are a wide variety of sources that attempt to define Cloud Computing, many with subtly different nuances, and often including benefits (e.g. flexibility, agility) that are potential outcomes from going Cloud, but certainly don’t belong as part of its definition.

I prefer the U.S. Department of Commerce National Institute of Standards and Technology (NIST) draft definition, which is well-considered, well-written and carries the weight of a standards body behind it.  Their definition sets out five essential characteristics:

  1. On-demand self-service: This means that a service consumer can add or delete computing resources without needing someone working for the service provider to take some action to enable it.
  2. Broad network access: Note that the NIST definition does not specify that the network is the Internet (as some other definitions do).  This is necessary to allow for private clouds.  NIST goes on to say that cloud services use standard mechanisms to promote use by a wide variety of client devices.  It’s not clear to me that this should be a full-fledged requirement, but it is certainly in keeping with the spirit of the cloud concept.  One could imagine a cloud that uses custom protocols agreed to by a closed group of consumers, but perhaps the word “standard” still applies in that it would be a standard across the consumer group.
  3. Resource pooling: Also known as multi-tenancy, this characteristic requires that the cloud serve multiple consumers, and that the resources be dynamically assigned in response to changes in demand.  The definition goes on to say that there is also a sense of location independence in that the consumer has no control over the location where the computing takes place.  It is important to distinguish “control over” from “knowledge of”.  The consumer may well know which specific data centre the resources are running in, particularly in the case of a private cloud.  There may also be limitations for compliance or security purposes on where the resources can be drawn from.  The important point is that the consumer cannot pick and choose between resources of a particular class; they are assigned interchangeable resources by the provider, wherever they happen to reside, within the limits of the service agreement.
  4. Rapid elasticity: Capabilities need to be rapidly provisioned when demanded.  The definition does not specify how rapidly, but the intent is that it be in a matter of minutes at most.  The service must be able to scale up and back down in response to changes in demand at a rate that allows potentially unpredictably varying demands to be satisfied in real time.  Ideally, the scaling is automatic in response to demand changes, but need not be.  The definition then goes on to say that the resources often appear to be infinite to the consumer and can be provisioned in any quantity at any time.  This is of course not a rigid requirement.  A cloud service could put an upper bound on the resources a particular consumer could scale to, and all clouds ultimately have a fixed capacity, so this clearly falls in the “grand illusion” category.
  5. Measured service: The NIST definition specifies that cloud systems monitor and automatically optimize utilization of resources.  The definition does not specify the units of measurement, and in fact Amazon and Google’s cloud services meter and charge using very different models (in brief, Amazon in terms of infrastructure resources and Google by page hits).  What is surprising is that the definition does not state that the consumer is charged in proportion to usage, which many definitions consider the most fundamental tenet of cloud computing.  The NIST definition allows a situation, for example, where several consumers (say, members of a trade organization) decide to fund and build a computing facility meeting the five requirements and share its use, but don’t charge back based on usage even though it would be possible.
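Two of the characteristics above – rapid elasticity with a real upper bound (the “grand illusion”), and measured service – can be sketched in a few lines of toy code. The capacity figures, instance cap and per-hour rate below are all invented for illustration, not drawn from any real provider's pricing:

```python
import math

def autoscale(demand, per_instance_capacity=100, max_instances=50):
    """Return the instance count needed to serve demand, within a hard cap.

    The cap reflects the 'grand illusion' point above: resources appear
    infinite to the consumer, but every cloud has a fixed ultimate capacity.
    """
    needed = math.ceil(demand / per_instance_capacity)
    return max(1, min(needed, max_instances))

def metered_cost(instance_hours, cents_per_instance_hour=5):
    """Measured service: charge in proportion to metered instance-hours."""
    return sum(instance_hours) * cents_per_instance_hour  # cost in cents

# Demand (requests/hour) spikes and falls; provisioned capacity follows it.
hourly_demand = [150, 900, 2400, 2400, 600, 80]
instances = [autoscale(d) for d in hourly_demand]
print(instances)                # [2, 9, 24, 24, 6, 1]
print(metered_cost(instances))  # 330 cents for 66 instance-hours
```

Note that, per the NIST discussion above, the metering function is part of the definition even when no one is actually charged back: the facility still measures usage.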

There’s a lot to like about the NIST definition and it is the one I’ll be using in subsequent articles.  We’ll be digging into what people and organizations are actually doing with cloud computing (without all the hype and hyperbole), and practical considerations for success from both the business and technical viewpoints.

Larry Simon is an IT strategist who advises startups through to Fortune 500 corporations and government institutions on achieving maximum return from their information technology investments through sound governance and practices.

Should Business Strategy be Influenced by Technological Considerations?

Can business strategy be created in isolation from technology considerations? There is a widespread belief in the Business Community that Business Strategy comes first and that technology then follows in some way to support that business.

In my experience the common perception among organizations is that Business defines its strategy first and then technology enables the strategy.

Strategy Development Process:

In order to explore the role technology plays in shaping and supporting the business, let’s look at how strategies are developed.  There has been a significant amount of research done and published on how strategies are developed.  Here are some relevant highlights.

There are two main dimensions to strategy development.

  1. Visionary thinking, based on intuition – a sense, an ability to make bold predictions and define goals.
  2. Analytical thinking, where strategy development is largely based on scientific analysis: considering options and making recommendations based on the analysis, followed by implementation.
    • Strategic Analysis, guided by a scientific approach to understanding your markets, competitors, value chain, and the bargaining power of the key stakeholders.  It also entails understanding the strengths and weaknesses of your organization against the opportunities and threats that the external environment presents.
    • Strategy Formulation, guided by the analytical findings and alignment to the vision and overall goals of the organization, to create a strategic road-map.
    • Strategy Implementation, which is of course converting the strategy to real results by successfully implementing it.

It is strategy development that is the focus of this article – specifically, the strategic analysis which then guides strategy formulation and implementation.

Is there a place for technological consideration in strategic analysis? The answer is quite apparent, as the following examples demonstrate.

Technological Influences on the Business Landscape

Examples of technologies that have had a transformative impact on the business value chain, and have redefined markets and distribution channels, are all around us.

The globalization phenomenon enabled by the Internet is one of the most profound. The Internet has impacted all the traditional dimensions of business strategy (reduction in barriers to entry, increased market size across the globe without the limitations of geographic divide, increased competition, etc.).

The financial services industry is a prime example of an industry where technology has transformed the value chain, redefined competitive forces and given consumers a tremendous amount of bargaining power.  Entry barriers have been declining, and new competitors have emerged. Some financial products and services have become more transparent and commoditized, making the market more competitive. The Internet as a tool to create a new service delivery channel (reduced channel costs, 24/7 availability) has put pressure on the more traditional branch-based channels. The resulting service delivery cost structure has changed. ING is operating on the model that bricks and mortar are not required to sell its banking products and services.

The healthcare value chain has likewise been transformed by technological advances. Linking healthcare records through electronic information exchange, and moving diagnostic imaging from traditional film to digital, have redefined the value chain and changed the balance of power between suppliers and buyers, not to mention the very nature of the products and services being delivered.

The retail industry is another example where technology has changed the business landscape.  Amazon’s strategic business model was completely defined by technology.

Relationship between Business and Technology

Given how profoundly technology has influenced our business and personal lives, it is hard to fathom how a successful business strategy can be defined without considering technological influences and enablers.  By creating a partnership between Business and Technology at the strategy development stage, you are creating a strategy that is well formed and can maximize business value and competitive positioning by embedding technological considerations from the very start (and not as an afterthought!).

So why is it that there is a significant divide between the Business and Technology?  In subsequent articles, I will focus on why there is this barrier (real or perceived) that creates this divide between Business and Technology.

If you have examples to demonstrate the benefits of business/technology partnerships, please share your thoughts on this forum.

The Implicit Value of Content is Realized Through Business Process

As I have noted before, much of the historic discussion in the document management field has concerned the cost of producing content, or the cost of finding existing content. But the value of a document, or any other piece of content, is seldom the same as its cost of production.

I was chatting about this the other day with my colleague James Latham. He used an invoice as an example of a piece of content that may be managed by an enterprise content management (ECM) system. James noted that, ‘There is inherent or explicit value in an invoice’. In fact the value of an invoice is fairly tightly linked to the cash it represents. A $10 bill has an explicit value of $10. Likewise a delivered invoice for $10 has a value of about $10 to an organization. Arguably it is not quite as valuable as $10 cash given the delay and perhaps uncertainty of payment, but it is close enough in most cases and will be treated as such in an accounting system.

There is a case where a $10 bill is worth much more: if it is a rare, old $10 bill, it may have a lot of implicit value (e.g. to collectors it may be worth hundreds of dollars) above its explicit value of $10. Tangible value (explicit plus implicit) is established by sale of the item itself or the recent valuations of comparable items. But it is hard to think of invoices, especially electronic invoices (i.e. digital content), as having any implicit value.

Are there other kinds of enterprise content besides invoices that clearly have implicit value? I think so. Here’s a good example: documents that support a patent application for a product with large market potential may have huge implicit value that greatly exceeds their cost of production and their explicit value at a given moment. This implicit value may become more explicit over time with the issue of a patent, together with product and market advances. At some point an intellectual property sale could attribute very significant tangible value to the documentation.

In this patent documentation example, the application of process over time helps to create tangible value. In ECM discussions we often speak of the context of content as helping to give it meaning, but clearly we also need to consider how process can give it value.

Enterprise Content Architecture – my take on the Metastorm acquisition

I’m particularly excited by today’s announced acquisition of Metastorm by OpenText, but not perhaps for the same reasons as many others. What excites me is the potential of Metastorm’s strengths in Enterprise Architecture (EA) and Business Process Analysis (BPA). As noted in the release:

“Metastorm is a leader in both BPA and EA as recognized by Gartner in the Gartner Magic Quadrant for Business Process Analysis Tools, published February 22, 2010 and the Gartner Magic Quadrant for Enterprise Architecture Tools, published October 28, 2010.”

These capabilities play to both the ‘Enterprise’ and the ‘Content’ in Enterprise Content Management (ECM).

Organizations depend on a growing proportion of knowledge workers, as I discussed in a previous post (Value for Knowledge Workers), but as noted in the McKinsey study I covered (Boosting the productivity of knowledge workers), most organizations do not understand how to boost the productivity of knowledge workers, or indeed the barriers to that productivity. As I noted:

“What struck me in reading the article is that while an increasing proportion of staff in companies are knowledge workers, it is not clear what knowledge work is and how to best enable it to drive productivity gains. Given that, it is hardly surprising that people struggle to define the value of those software tools best able to support knowledge management.”

Content is the currency of knowledge work. It supports the exchange of knowledge during business processes, and is very often the work product of such processes (e.g. a market analysis report, an engineering drawing or a website page). Too often in the past, discussion of the value of content has centered on either reducing the unit cost of producing, finding or using content, or mitigating compliance risks created by poor content management.

This is not a new theme for me; indeed last August I expressed my enthusiasm for why Content Matters. I noted:

“It’s no surprise to people that you can understand a business by ‘following the money’ or ‘following the customer’ and that is the basis for ERP and CRM systems. On the other hand most people are only just coming to realize that ‘following the content’ is just as important, so while we’ve talked about content management for many years, that conversation is starting to be important to business.”

The potential to apply Metastorm’s ProVision tool set to elucidate and illustrate the critical role of content in the achievement of enterprise goals is an exciting one which offers new value to our customers.

Considering the Cost & Value of Digital Content for an Enterprise

The way that the value of digital content changes over time, and how an enterprise content management (ECM) system might help to realize and/or retain greater value, was the subject of my last post (http://martin-fulcrum.blogspot.com/2010/06/calculating-value-of-content-in-ecm.html). Lee Dallas retweeted that post, but also referenced a very interesting earlier blog post (2008) by fellow member of ‘Big Men on Content‘ Marko Sillanpää on the cost of content (link). Sillanpää considered content lifecycle costs as follows:

Cost of Content = (Annual Authoring Costs + Annual Review Costs) / New Objects per Author

Content authoring and review are not the only activities that incur cost – there are costs associated with each step in the content lifecycle, notably including the costs of distribution, storage and ultimate destruction. Effective content distribution is becoming increasingly important to the realization of value.

Cost and value are of course different concepts. The cost of an item does not necessarily reflect its value, as anyone who has watched the TV show “Antiques Roadshow” knows! In business, where there is an emphasis on the bottom line, the value of content ought on average to exceed its cost, or it should not have been created. But for a given piece of content, its cost is generally related to size and complexity, not what it enables. On the other hand, value is tied to enablement and varies over time – often declining gradually or precipitously, but sometimes increasing!

It can be hard to explain to people how managing content benefits a business. However, I have found that identifying its ‘enterprise value’ is powerful. A good top-down approach is to reference the value chain of a business, using Michael Porter’s original simple model. People understand that enterprises take input from suppliers and partners and, through a series of steps, add value that can be realized in a final sale to customers. Clearly the effective execution of those steps adds to efficiency.
When challenged, most people can identify content that contributes or is even essential to the completion of each of those value steps and their constituent processes. For example, an Engineering Department must create, review and approve engineering drawings, and then pass them on to the Manufacturing Department (see E, C & O value chain).

In my experience, taking a value perspective is generally more attractive, especially in growth industries, than a cost and cost avoidance perspective – which has classically been the basis for return-on-investment (ROI) approaches to software justification.

Syndicated at http://conversations.opentext.com/
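As a footnote, Sillanpää's lifecycle formula quoted above can be made concrete with a small worked calculation. The dollar figures and object count below are invented purely for illustration, and I have taken the denominator simply as the yearly count of new objects:

```python
def cost_of_content(annual_authoring_costs, annual_review_costs, new_objects):
    """Sillanpää's formula:
    Cost of Content = (Annual Authoring Costs + Annual Review Costs) / New Objects
    """
    return (annual_authoring_costs + annual_review_costs) / new_objects

# Hypothetical team: $400,000 authoring + $100,000 review, 2,500 new objects/year.
print(cost_of_content(400_000, 100_000, 2_500))  # 200.0 dollars per object
```

The number that comes out is a unit cost, which is exactly the point of the post: it says nothing by itself about what any one of those 2,500 objects enables the business to do.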

Taking the Pulse of your Business Content with microblogging

When many users first encounter microblogging they don’t ‘get it’. Twitter is of course the classic and most widely known microblogging site, and its style has been taken up by many others such as Facebook in a broader set of social media approaches. A common initial reaction is something to the effect: “I don’t care if your cat just threw up – in fact, I’d rather NOT know!!”

Once people start to microblog, they find many different ways that it can provide value, beyond answering the question What’s happening? that twitter poses. Commentators have described endless ways of using twitter such as: 5 marketing approaches, 10 diverse applications, 50 different topics, etc.

But how does microblogging add value within an organization? Most of the discussions about business value have been about better ways to reach outside an organization to customers and partners by breaking down barriers, increasing transparency and the like. At first blush, making the case for microblogging in the workplace might seem to be hard. People often comment that they are too busy to engage in ‘chit-chat’ while at work. But over the last couple of years the use cases that have real business value have become clearer.

For me there are two general styles of internal business microblogging:

  1. User status updates – close to the twitter model, but with a distinctly different topic set
  2. Content status updates – fairly unique to business and keyed to the fact that many work processes produce and manage content (i.e. documents and other business files as understood in content management)

At Open Text we recently released the Pulse module for Livelink 9.7.1 that adds microblogging capabilities to support both styles (available for free to customers from the Knowledge Center).

Status updates are pretty much what you’d expect – you can make a post about anything, although some of the most useful ones are:

  • “I’m looking for…”
  • “Anyone interested in…”
  • “Have we…”

These have value because they help people to be more effective through better networking in an organization. You can select specific users to follow, and you can follow the stream from all users. We have a very similar Pulse capability in Open Text Social Workplace.

BUT, I think the real advance in Livelink/Content Server Pulse is to follow the status of content, irrespective of location, in a range of very powerful and comprehensive ways. Sure, you can post a link to content in twitter, and many microblogging services allow you to attach documents or other kinds of files to your posts. But the advance here is to have the act of adding or changing a piece of content anywhere in an ECM system create a status post. The feed is reporting a content action by another person. If I’m following Joe and he adds a new sales presentation anywhere, I can see it in the status stream – provided of course I have permission in the repository to see the added content. All of the important support for compliance is maintained.

There are many ways to slice-and-dice: by following specific people or all people, and following changes in user status, content or both. You can also ‘pulse’ specific content objects, so all changes and all comments about a piece of content are seen in the unique Pulse stream of that object. It’s like a filtered window into the stream looking at just one object, even if the ECM system contains millions of documents. And pulsing is not just limited to files/documents, but is applied to containers like folders and places such as project workspaces and communities. You can imagine the power of an accumulated stream of all content and status activity related to a project!

Livelink has had a notification capability for many years, but it requires users to first identify existing documents and containers that they would like to follow. Pulse adds the human dimension – you can be notified of changes based on the people you follow and what they do with the content.

To be honest, I’m still ‘figuring out’ all of the ramifications and power of Livelink/Content Server Pulse, but I’m very excited!  If you’d like to learn more:

  • Initial description in the May 2010 issue of NewsLink
  • Free Webinar Thursday 3 June 2010
  • Software and documentation in the Knowledge Center
  • And if you are an Open Text Online Communities member you’ll be able to use Pulse very shortly (announcement)
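The permission-filtered behaviour described above – content events surfacing in a follower's stream only when that follower is allowed to see the underlying object – can be sketched roughly as follows. All of the names, classes and structures here are hypothetical illustrations, not the actual Pulse or Content Server API:

```python
from dataclasses import dataclass, field

@dataclass
class ContentEvent:
    actor: str      # who added or changed the content
    object_id: str  # the document, folder or workspace affected
    action: str     # e.g. "added", "updated", "commented"

@dataclass
class Repository:
    # object_id -> set of users permitted to see that object
    acl: dict = field(default_factory=dict)

    def can_see(self, user, object_id):
        return user in self.acl.get(object_id, set())

def pulse_stream(events, repo, user, following):
    """Show events from followed users, filtered by repository permissions."""
    return [e for e in events
            if e.actor in following and repo.can_see(user, e.object_id)]

repo = Repository(acl={"doc-1": {"alice", "bob"}, "doc-2": {"bob"}})
events = [ContentEvent("bob", "doc-1", "added"),
          ContentEvent("bob", "doc-2", "updated"),   # alice lacks permission
          ContentEvent("carol", "doc-1", "added")]   # alice doesn't follow carol
visible = pulse_stream(events, repo, user="alice", following={"bob"})
print([(e.actor, e.object_id) for e in visible])  # [('bob', 'doc-1')]
```

The design point is that the permission check happens at read time, in the repository, so the social feed can never leak content the compliance model would otherwise protect.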

Syndicated at http://conversations.opentext.com/

Customer Community Success Metrics for 2009

Customer communities are all the rage nowadays, but it is not always clear what works, and indeed how to measure success. As 2009 draws to a close, we have been reviewing how Open Text customer communities have been doing.

Background: For those not familiar with Open Text, we are a vendor of enterprise-class software to manage digital files (called content). The term enterprise indicates that we sell to organizations, not consumers. We have relatively few customer organizations, but they are typically some of the biggest organizations in business and government. We estimate that at least 1 in 3 Internet users visit sites that run our software! The software we use for our own communities is the same as we sell.

The Sites

For historic reasons, we run three primary community sites (requiring membership) in addition to our typical corporate websites. The community sites are:

  • Open Text Knowledge Centre (KC)
    • Primarily for system administrators of the software we sell
  • Open Text Developer Network (OTDN) which is housed on the KC
    • Primarily for developers using Open Text APIs
  • Open Text Online Communities
    • Primarily for business champions and power users

Site Metrics

  1. The Knowledge Centre is by far the oldest community, dating back to 1996! As you’d expect, it has the most members and the most ongoing activity. Every day approximately 4,000 users access the site, and between 150,000 and 200,000 document downloads are performed every month!
  2. OTDN just completed its first full year, during which just over 3,200 unique users participated.
  3. Online Communities got started in its present form in 2005. This last year, 10,600 members collectively visited 118,000 times.

These numbers only measure direct participation. As you might expect, many community members participate through email-mediated discussions.

Convergence

Multiple systems have traditionally meant that there are multiple, disconnected silos of information. As a result, users don’t know where to look and administrators have to duplicate critical content between systems. A better approach is to deploy a single ‘enterprise library’ of digital files (content) which contains all of the files, but just one active copy of each. The three sites above will soon converge to use the same enterprise library, which will also be used by our corporate website that is open to the general public.

One single repository can make user navigation harder unless the most relevant content is presented and organized in a fashion that best meets the needs of each type of user (i.e. persona). Communities of users with similar interests or jobs are one approach to organizing content, but of course there are others, including personalization based on the activities and preferences of specific users.

Measuring 2010 Success

These communities will continue to develop, but the latest social networking approaches provide new ways to surface important content. As we deploy more social networking approaches during 2010, we’ll have a solid base of community metrics from 2009 to judge progress. As you might expect, activities on external sites like twitter, YouTube and facebook are becoming increasingly important.

Syndicated at http://conversations.opentext.com/