Tag: Design

Design Thinking: A Foundation for Business Innovation

“Innovate or Die” is a popular expression among thought leaders to describe the economic and cultural climate we now do business in.  Technology’s impact on people and business continues to change the way we communicate, work and live, and at a disruptive pace.  Businesses need to respond by innovating their business models and experimenting with new methods of doing business.

Innovation today requires a creative way of thinking: Design Thinking.  We need a foundation for experimenting with new ideas that apply emerging tools and models to existing methods of doing business.  Design Thinking is not new, though its popularity has grown in the digital age as a practical methodology for leveraging emerging technologies and modernizing business models.

Applying Design Thinking to innovation means adopting a solution-focused mindset that combines empathy, rationality and creative insight into business, technology and people.  This is important because integrating emerging technologies is simply not enough to achieve competitive advantage and sustainability.  We must innovate not only the tools we use, but the methods by which we use them.  We must understand that current business cultures and practices can be a barrier to innovation.  We need to recognize that consumers are more tech-savvy and social than ever before.  We need to enable people as much as we enable the business.

Design Thinking model illustrating the innovation zones and relationships between business, people and technology:

[Figure: Design Thinking Model]

The popular words of caution, “It’s not about the technology…”, remind us that we must be empathetic to be innovative.  Empathetic to whom? People. The people who work for your business and the people who buy your products and services.  When we understand how culture is evolving inside and outside the business, and when we understand the behaviors of consumers, we will be better equipped to innovate.

The challenge today is to break free from our comfort zone, to unlearn and rethink how we do business, and to foster new competitive advantage.  Our world is in a state of great change; we need to take a step back and be willing to open our mindsets.  I believe the methodologies of Design Thinking can be applied to business innovation and guide leaders in the right direction.  There is as much opportunity as there is threat in the digital age, and businesses will either “Innovate or die.”

http://www.youtube.com/watch?v=6axgrZUrqH8

Part 2 – Design Thinking: methods for business innovation

Cloud Computing Defined

Welcome to the first installment of what will be an on-going series on Cloud Computing.  Everyone in the industry is talking about it and the media is awash with hype.  I’ll be taking a different approach by trying to bring some clarity and reason to help you make informed decisions in this area.

The place to start is with a definition of the term.  There are a wide variety of sources that attempt to define Cloud Computing, many with subtle differences of nuance, and often including benefits (e.g. flexibility, agility) that are potential outcomes of going Cloud but certainly don’t belong in its definition.

I prefer the U.S. Department of Commerce National Institute of Standards and Technology (NIST) draft definition, which is well considered, well written and carries the weight of a standards body behind it.  Their definition sets out five essential characteristics:

  1. On-demand self-service: This means that a service consumer can add or delete computing resources without needing anyone working for the service provider to take action to enable it.
  2. Broad network access: Note that the NIST definition does not specify that the network is the Internet (as some other definitions do).  This is necessary to allow for private clouds.  NIST goes on to say that cloud services use standard mechanisms to promote use by a wide variety of client devices.  It’s not clear to me that this should be a full-fledged requirement, but it is certainly in keeping with the spirit of the cloud concept.  One could imagine a cloud that uses custom protocols agreed to by a closed group of consumers, but perhaps the word “standard” still applies in that it would be a standard across the consumer group.
  3. Resource pooling: Also known as multi-tenancy, this characteristic requires that the cloud serve multiple consumers, and that the resources be dynamically assigned in response to changes in demand.  The definition goes on to say that there is also a sense of location independence in that the consumer has no control over the location where the computing takes place.  It is important to distinguish “control over” from “knowledge of”.  The consumer may well know which specific data centre the resources are running in, particularly in the case of a private cloud.  There may also be limitations for compliance or security purposes on where the resources can be drawn from.  The important point is that the consumer cannot pick and choose between resources of a particular class; they are assigned interchangeable resources by the provider, wherever they happen to reside, within the limits of the service agreement.
  4. Rapid elasticity: Capabilities need to be rapidly provisioned on demand.  The definition does not specify how rapidly, but the intent is that it be a matter of minutes at most.  The service must be able to scale up and back down in response to changes in demand, at a rate that allows potentially unpredictable demand to be satisfied in real time.  Ideally the scaling is automatic in response to demand changes, but it need not be.  The definition then goes on to say that the resources often appear to be infinite to the consumer and can be provisioned in any quantity at any time.  This is of course not a rigid requirement.  A cloud service could put an upper bound on the resources a particular consumer could scale to, and all clouds ultimately have a fixed capacity, so this clearly falls in the “grand illusion” category (see the sketch after this list).
  5. Measured service: The NIST definition specifies that cloud systems monitor and automatically optimize the utilization of resources.  The definition does not specify the units of measurement, and in fact Amazon’s and Google’s cloud services meter and charge using very different models (in brief, Amazon in terms of infrastructure resources and Google by page hits).  What is surprising is that the definition does not state that the consumer is charged in proportion to usage, which many definitions consider the most fundamental tenet of cloud computing.  The NIST definition allows a situation, for example, where several consumers (say, members of a trade organization) decide to fund and build a computing facility meeting the five requirements and share its use, but don’t charge back based on usage even though it would be possible.
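
To make the elasticity characteristic concrete, here is a minimal sketch of threshold-based autoscaling in Python.  It is an illustration only: the thresholds, limits and function names are invented for the example, not taken from NIST or any particular cloud provider.

```python
# A toy autoscaler illustrating "rapid elasticity": capacity grows and
# shrinks with demand, ideally without human intervention. The thresholds
# and limits below are invented for illustration.

SCALE_UP_LOAD = 0.75    # add capacity above 75% average utilization
SCALE_DOWN_LOAD = 0.25  # release capacity below 25%
MAX_INSTANCES = 100     # every "infinite" pool is bounded in practice

def next_instance_count(current: int, average_load: float) -> int:
    """Return the instance count to provision for the observed load."""
    if average_load > SCALE_UP_LOAD and current < MAX_INSTANCES:
        return current + 1          # scale up to meet demand
    if average_load < SCALE_DOWN_LOAD and current > 1:
        return current - 1          # scale down, releasing pooled resources
    return current                  # demand is within bounds; hold steady

# Example: load spikes to 90% while running 4 instances.
print(next_instance_count(4, 0.90))  # -> 5
```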

There’s a lot to like about the NIST definition and it is the one I’ll be using in subsequent articles.  We’ll be digging into what people and organizations are actually doing with cloud computing (without all the hype and hyperbole), and practical considerations for success from both the business and technical viewpoints.

Larry Simon is an IT strategist who advises startups through to Fortune 500 corporations and government institutions on achieving maximum return from their information technology investments through sound governance and practices.

Value for Knowledge Workers

Demonstrable value goes a long way to supporting the deployment of new software tools.

For structured business processes, return on investment (ROI) is comparatively easy to estimate. Where unstructured or semi-structured digital content items (e.g. documents, spreadsheets, faxes) enable a given structured process (e.g. accounts receivable), their contribution to the overall value created is also typically quantifiable.

Where the process itself is unstructured, the measurement of value is much harder. Perhaps the largest class of unstructured processes in a company falls in the category of knowledge work. The difficulties organizations have in understanding knowledge work are highlighted in an article just published in the McKinsey Quarterly entitled “Boosting the productivity of knowledge workers”.

  • Aside: Unfortunately a subscription is required to read the full article – hopefully you have one.

The article starts with the proposition that few senior executives can answer the question: “Are you doing all that you can to enhance the productivity of your knowledge workers?” This is unfortunate because, “Organizations around the world struggle to crack the code for improving the effectiveness of managers, salespeople, scientists, and others whose jobs consist primarily of interactions—with other employees, customers, and suppliers—and complex decision making based on knowledge and judgment.”

The authors, Eric Matson and Laurence Prusak, describe five common barriers that hinder knowledge workers in more than half of the interactions in surveyed companies:

  1. Physical
  2. Technical
  3. Social or Cultural
  4. Contextual, and
  5. Temporal

Physical barriers include geographic and time zone separation between workers, and are typically linked to Technical challenges – where workers lack the necessary tools to overcome the physical barriers that separate them. As the article notes, there are many software tools available that can help – these include the various collaborative and social media tools, as well as the more classic document management applications encompassed in the broadest definitions of Enterprise Content Management (ECM).

Of course the availability of software tools does not guarantee that users will use them effectively; indeed, Social (e.g. organizational restrictions, opposing incentives and motivations) and Contextual barriers (e.g. not knowing who to consult or to trust) play a large part in hindering adoption.

The fifth barrier is Temporal. Time, or rather the perceived lack of it, is also a critical factor. In my experience knowledge workers do not consider time spent using social media and collaborative tools as important as other activities. Under time pressure they will stop using these tools if they need to spend more time on other activities they perceive as “real work”.

What struck me in reading the article is that while an increasing proportion of staff in companies are knowledge workers, organizations are still not clear on what knowledge work is or how best to enable it to drive productivity gains. Given that, it is hardly surprising that people struggle to define the value of the software tools best able to support knowledge management.

Content Matters

I was chatting with a colleague yesterday and he related how he interviews people to join our company. We quickly dropped into role playing – with me as the job candidate. He had a compelling proposition, but as I told him, he was missing the thing that excited me: Content Matters!

As I started to tell him why content matters I found myself getting excited. I realized I’m actually quite passionate about it! Not content itself, but what it enables and how it’s used.

Content matters to companies in a way that changes how they work, how they create value and whether they succeed. It matters whether they recognize that fact or not.

If you want to understand what drives a company look at their value chain – how they create value – and how they are organized to execute each stage in the value chain. Within each stage there are typically many processes, each with many steps. At almost every step there is some content that is created, reviewed, followed or otherwise used; how well this is done makes a difference to effectiveness.

It’s no surprise to people that you can understand a business by ‘following the money’ or ‘following the customer’ and that is the basis for ERP and CRM systems.

On the other hand, most people are only just coming to realize that ‘following the content’ is just as important. So while we’ve talked about content management for many years, that conversation is only now starting to be important to business.

Calculating the Value of Content in ECM

It’s only worth expending effort to manage something if it has value – usually positive, but sometimes negative. So the concept of content value is implicit in enterprise content management (ECM).

On the other hand, the value of a given content object (i.e. digital file) such as an email or document generally declines over time – or at least this is the common wisdom. I have seen graphs drawn mapping ‘value’ over ‘time’, with a smooth decline of value tending to zero. However, such a representation is clearly an average of value across many types of enterprise content.

If you look at individual pieces of content, then you’ll find different profiles:

  • In compliance, a piece of content may retain 100% of its value for a defined period of years and then abruptly drop to having no value, or even having negative value (liability) that should trigger its destruction
  • In knowledge management, a piece of content may have declining value over time, but then because of some new event may suddenly have increased value

But this perspective reflects only the Inherent or Independent Value of a piece of content – the value assessed entirely on the information contained in the object. It seems to me that there are at least two other factors that impact value:

  • Context – when correctly combined with other pieces of content, a given piece of content may have greater value. For example, a specifications document is more valuable together with the associated requirements document. Value can often be realized by the way in which context is presented between content items – how they are grouped, ordered or ranked.
  • Impairment – ironically, the value of a piece of content may be impaired by efforts to manage content. If you mix valuable pieces of content with large amounts of irrelevant material that should have been destroyed, you reduce the chances that the valuable content can be found and its value realized. Keeping everything is usually a bad idea. And users often impair value when they misclassify content.

So the available value of a piece of content to an organization may be expressed as follows:

Available Value = Inherent Value x Context / Impairment

What this says is that content management efforts can be beneficial, but if not done well can actually be destructive.
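
As a toy illustration of this expression – with invented numbers, purely to show the shape of the relationship – consider:

```python
# Toy illustration of the content-value expression above.
# All numbers are invented for the example.

def available_value(inherent: float, context: float, impairment: float) -> float:
    """Available Value = Inherent Value x Context / Impairment.
    context > 1: related items (e.g. a spec filed with its requirements)
    amplify value; impairment > 1: junk and misclassification make the
    content harder to find, so less of its value can be realized."""
    return inherent * context / impairment

# A specification stored alongside its requirements document, well managed:
print(available_value(100.0, context=1.5, impairment=1.0))  # -> 150.0
# The same document buried among material that should have been destroyed:
print(available_value(100.0, context=1.5, impairment=3.0))  # -> 50.0
```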

What’s in a name? Or do you mean what I think you do? Implications for enterprise content culture

Most people love a good rant, especially when it is well-founded, and I’m no different. And so it was that I really enjoyed Laurence Hart’s recent, self-admitted rant (http://wordofpie.com/2010/03/04/a-rant-against-cms/). In his Word-of-Pie blog, Laurence railed against what he sees as the misuse of the term ‘Content Management Systems’ or CMS. It was topical, well-informed, and most importantly to me, resonated on several fronts.

In brief, Laurence’s position is that: “…All you Web CMS people need to give the term CMS back! It doesn’t belong to you. A long time ago you took it while the broader content community was trying to futz with the term ECM [Enterprise Content Management]. By the time we realized what was happening, you had taken the term…”

His issue is that while web content management (WCM) is a valid description, it is too often abbreviated to content management (CMS), even though there are a wide range of content types beyond web pages. The common use of CMS is much narrower than is implied. Enterprise Content Management (ECM) was coined in part to describe all content that an enterprise might have.

I’m not interested in the semantic debate about what each term means and which is the correct term to use. I am interested in what this discussion says about culture and the difficulty of getting people in an enterprise to take a broad view of content. There seem to be at least ‘two solitudes’ in content management – ECM and CMS. It is interesting how specific technology applications shape and restrict expectations.

Last year my employer, Open Text, acquired Vignette, one of the oldest and most established CMS vendors. Most of Open Text’s heritage is in document and records management (Livelink and Hummingbird eDOCS, for example), process management (IXOS) and collaboration; in other words, ECM. We published a trilogy of books on ECM in 2003-2005. While some staff came from acquisitions prior to Vignette that had expertise in WCM (notably RedDot), they represented a comparatively small portion of the Company. The Vignette acquisition brought a much larger group of CMS-oriented staff to Open Text.

I think Open Text is richer for the breadth of perspectives, but we have had to work through the challenge of merging the different cultures. Note I’m not talking about corporate cultures, as indeed the companies were quite similar, but rather the application culture of how best to manage content to meet all the needs of our enterprise customers. Each of us has tended to think mostly of some content types, some approaches to content management, and some business needs.

Take Open Text’s own Intranet as an example. Open Text has been running an Intranet called ‘Ollie’ on Livelink technology since 1996. Fundamentally the Livelink model is one of web folders containing ‘documents’ of any type. This model works really well when the individual and team work products to be shared are typically documents – so it’s great for supporting teams and managing records. However, linked web pages are a much better vehicle for the dissemination of centrally managed content, especially information from an organization to its staff. So last year we broadened our Intranet systems to include a true WCM capability in parallel.

For some in Open Text, the internal use of WCM came none too soon, while for others it was a surprise! I had to make a video to ‘educate’ staff on why we had both approaches and how to choose the best system for their specific needs. It turned out to be easiest to provide context by talking about the parallel evolution of ECM and WCM technologies over the last 15 to 20 years.

The application of social media in an enterprise has also challenged cultural expectations. Those with a WCM background have generally talked about the advantages of working closely with customers through external websites. Most of their value propositions of breaking down barriers and being more transparent are absolute anathema to those ECM practitioners who have focussed on internal process and records management for compliance.

Traditional document management approaches provide another example of cultural expectations nurtured by specific technology experiences. As I mentioned above, Livelink used a web folder paradigm to organize content. It also had rich metadata capabilities, but users tend to think of these as supplementary or optional ways of organizing content. It is fair to say that most users think first and foremost of folders – so it can be a challenge to collect metadata from them. In contrast, with our eDOCS content management system (from Hummingbird) there are no folders – everything is organized through metadata. eDOCS users can find browsing folders frustrating. Going forward, these alternate approaches are merged in our Open Text Content Server 2010 under our ECM Suite.

Defining effective taxonomies to organize content can be one of the biggest challenges for an enterprise. Generally people in specific departments, using specific systems, tend to define taxonomies that meet their immediate needs, but the taxonomies they create are generally too limited for wider use. Similarly, other groups create incompatible taxonomies, often to address similar needs. These limitations ultimately contribute to failure. Creating new taxonomies seems to be a recurring theme in many enterprises, as most are never broad enough, scalable or robust.

Ironically then, what a person means by ‘content’, the ‘content taxonomy’ they think is required for their organization, and their perception of the critical features of a ‘content management system’ are all highly subjective!

Predicting Sentiment in Advance

There are now a number of tools that monitor social networks looking at:

  • Sentiment analysis – general sentiment related to organizations and their brands
  • Topic trend analysis – the relative frequency with which topics are mentioned over time

Therefore, if I make a blog post or tweet, my topic and sentiments will be captured by automated systems, analyzed and reported. There are some pretty sophisticated tools being used by marketers, and while some are free, others are quite expensive. However, as a recent blog post noted, “A technology glitch demonstrates how fragile marketing measurement technology really is” (http://trenchwars.wordpress.com/2010/02/28/a-technology-glitch-demonstrates-how-fragile-marketing-measurement-technology-really-is/). That said, let’s assume they’ll get better or this Technorati glitch was atypical.

I can also manually get some trend information using Google Trends, for example looking at ‘ECM’ over time: http://trends.google.com/trends?q=%22ecm%22&ctab=0&geo=all&date=all&sort=0. However, the information on ECM is too sparse, and there is too much ‘contamination’ from other expansions of ECM, such as Engine Control Module.

But I have to admit I don’t use these tools. So I need a tool that, like all great advances, caters to laziness – or increased efficiency, as I might prefer to characterize it! I need help. I need push rather than pull technology. It occurs to me that I wouldn’t mind knowing how my proposed post relates to other posts already made. I’d get a report something like:

“Your post on ‘ECM’ would be the 47th post on this topic so far this year. This topic is declining in frequency.”

“Your ‘negative’ post on ‘content system metadata’ would align with 19% negative, 25% neutral and 56% positive posts on this topic.”

Besides putting my proposed post in context, I wouldn’t mind getting a sample of the most relevant posts so that I could potentially revise my post, add links, references, rebuttals, etc.
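
A small sketch of how such a ‘pre-post report’ might be assembled is below. Everything in it is hypothetical: the in-memory list of (topic, sentiment) pairs stands in for whatever a real monitoring service would actually return.

```python
# Hypothetical "pre-post report": before publishing, compare a draft post
# against posts already captured by some monitoring tool. The corpus below
# is an invented stand-in for a real monitoring service's output.
from collections import Counter

corpus = [  # (topic, sentiment) pairs harvested elsewhere
    ("ECM", "positive"), ("ECM", "positive"), ("ECM", "negative"),
    ("ECM", "neutral"),
]

def pre_post_report(topic: str, my_sentiment: str) -> str:
    matching = [s for t, s in corpus if t == topic]
    if not matching:
        return f"Your post would be the first on '{topic}' this year."
    counts = Counter(matching)
    mix = ", ".join(f"{counts[s] / len(matching):.0%} {s}"
                    for s in ("negative", "neutral", "positive"))
    return (f"Your '{my_sentiment}' post on '{topic}' would be post "
            f"#{len(matching) + 1} on this topic; existing posts run {mix}.")

print(pre_post_report("ECM", "negative"))
# -> Your 'negative' post on 'ECM' would be post #5 on this topic;
#    existing posts run 25% negative, 25% neutral, 50% positive.
```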

At some level this would be a form of assisted authoring. It wouldn’t have to be limited to blog posts. I’d like to do the same for content authored in an enterprise context. The reality is that many reports, white papers and similar knowledge-worker work products duplicate things already available, but people generally don’t look. It’s easier to start typing, imagining your work to be original, than to look first to see whether it’s already been done or whether there is something close that you can build on.

Syndicated at http://conversations.opentext.com/

Google’s impact on enterprise content management

Without a doubt Google has had a huge impact on the enterprise perspective on content management (ECM).

The pluses and minuses were highlighted by two blog posts yesterday:

On the plus side, John Mancini of AIIM listed three “fundamental assumptions about information management that affect the ECM industry” in his “Googlization of Content” post:

  1. Ease of use. The simple search box has become the central metaphor for how difficult we think it ought to be to find information, regardless of whether we are in the consumer world or behind the firewall. This has changed the expectations of how we expect ECM solutions to work and how difficult they are to learn.
  2. Most everything they do is free…
  3. They have changed how we think about the “cloud.” Google has changed the nature of how we think about applications and how we think about where we store the information created by those applications. Sure, there are all sorts of security and retention and reliability issues to consider…”

On the negative side, Alan Pelz-Sharpe made a post today in CMS Watch titled, “Google – unsuitable for the enterprise”. Alan introduced his piece by saying:

“For years now Google has played fast and loose with information confidentiality and privacy issues. As if further proof were needed, the PR disaster that is Buzz should be enough to firmly conclude that Google is not suitable for enterprise use-cases.” He went on to say, “It is inconceivable that enterprise-focused vendors… would ever contemplate the reckless move that Google undertook in deliberately exposing customers’ private information to all and sundry with Buzz.”

Google is a hugely successful company, and they are extremely profitable. However, they are not a software company. Fundamentally they are an advertising placement company, and everything they do is motivated by maximizing advertising revenue, whether directly or indirectly. Some 99% of their revenue comes from advertising; that advertising pays for every cool project they do and every service they offer.

While Google services to consumers have no monetary charge, they are not free:

  • You agree to accept the presentation of advertisements when you use Google products and services; most people believe these to be easily ignored despite the evidence of their effectiveness.
  • More importantly, you agree to provide information about your interests, friends, browsing and search habits as payment-in-kind. Most people sort of know this, but don’t think about it. If you ask them whether they are concerned that Google has a record of every search they have ever performed, they start to get uncomfortable. I expect most of us have searched on terms which, taken out of context, would take a lot to ‘explain.’

While most consumers in democracies are currently cavalier about issues of their own privacy, enterprises most certainly are not. Indeed, the need for careful management of intellectual property, agreements, revenue analyses and a host of other enterprise activities captured in content is precisely why they buy ECM systems.

The furor over Buzz points out that Google did things first and foremost to further its own corporate goals, which clash with those of other enterprises.

In contrast, Google’s goals require it to align with user needs, especially for good interfaces. An easy-to-use interface encourages and sustains use. That ought to be obvious to everyone, but when the effects of the interface on usage are easily measurable and directly tied to revenue (as in the case of Google Search), it becomes blatantly and immediately evident. By comparison, the development of an interface for an enterprise software product may take place months or even years before the product is released. Even if detailed usability research is done with test users, and in-depth beta programs are employed, the quality and immediacy of the feedback is less.

Besides easy interfaces, enterprise content management users expect ‘Google-like’ search, and are disappointed. There are generally two reasons for this:

  • Search results have to be further processed to determine whether each ‘hit’ may be shown to the user, based on their permissions
    • Typically 70-90% of the total computational time for enterprise search is taken up by permission checking
  • Enterprises don’t invest as much in search infrastructure as they would if rapid delivery of search results were seen as critical

The second point is probably more important than people admit. In my experience, significant computational resources are not allocated to search by IT departments. I suspect that they look at average resource utilization, not peak performance and the time to deliver results to users. To deliver the half-second-or-less response that Google considers essential, hundreds of servers may be involved. I am not aware of any enterprise that allocates even the same order of magnitude of resources to content searching, so inevitably users experience dramatically slower response times.
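
To see why permission checking dominates, here is a minimal sketch of permission-trimmed search. It is an illustration under invented types: the index lookup is deliberately naive, and the ACL check is a stand-in for the nested group and permission resolution that is usually the expensive step in a real system.

```python
# Sketch of permission-trimmed enterprise search. The index lookup is cheap;
# the per-hit permission check is where most of the time goes in practice.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    text: str
    readers: set = field(default_factory=set)  # users allowed to see it

def raw_search(query: str, corpus: list) -> list:
    """Cheap step: return every ranked hit for the query."""
    return [d for d in corpus if query in d.text]

def can_read(user: str, doc: Document) -> bool:
    """Expensive step in real systems: resolving nested groups and ACLs."""
    return user in doc.readers

def trimmed_search(user: str, query: str, corpus: list, limit: int = 10) -> list:
    results = []
    for hit in raw_search(query, corpus):
        if can_read(user, hit):        # the 70-90% of compute lives here
            results.append(hit)
            if len(results) == limit:
                break
    return results

docs = [Document("plan.docx", "2011 revenue plan", {"alice"}),
        Document("memo.docx", "revenue memo", {"alice", "bob"})]
print([d.name for d in trimmed_search("bob", "revenue", docs)])  # ['memo.docx']
```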

In summary, the alignment of optimal user experiences with Google’s need to place advertisements has advanced the standards of user interfaces and provided many ‘free’ services, but the clash of Google’s corporate goals with the goals of other corporations has shown that enterprise content has a value that enterprises are not prepared to trade away.

Syndicated at http://conversations.opentext.com/

The ‘Second Coming’ of Renditions – Video

Long-time ECM veterans will remember the concept of a document rendition – a transformed alternative of a document. I think we’ll see renditions again.

A rendition is essentially another form of a specific version of a document. There are two common types of renditions, based on format and content:

  1. The same information content as the original document, but a different file format
  • For example, a spreadsheet file can be renditioned as a PDF
  2. The same file format as the original document, but different content
  • For example, an MS PowerPoint document written in English can have a rendition that is also a PowerPoint file, but whose content has been translated into French

Renditions for limited bandwidth in the ’90s

In the 1990s, one of the common use cases was to deal with the limited bandwidth available at the time. It often took a long time to download and open a document just to see if it contained what you were looking for. Accordingly, Open Text Livelink automatically made HTML renditions of many common formats such as MS Word that were much smaller files and so could be downloaded much faster for quick review. I remember presenting the use case to customers: “If you want to look quickly at a file without opening the full thing…” Back then bandwidth was so limited it made sense. Now it seldom does, although there are specific use cases, like renditions that contain added content such as secured signatures, that still have value.

Bandwidth issues are back

Bandwidth is becoming limiting again – not for ‘simple’ text documents, but for rich media files such as videos. In fact bandwidth issues are so acute that the shape of the Internet has changed radically in the last few years. The explosive growth of video sharing has led to the rise of Content Delivery or Distribution Networks (CDNs) such as Akamai Technologies, Limelight Networks, CDNetworks and Amazon CloudFront to enable effective distribution. Akamai recently claimed they handle around 20% of Internet traffic by volume – most of this traffic is rich media, which must be delivered very quickly because users expect pages to load extremely quickly even if they contain a video. A recent Forrester report says the expected threshold for a page to load has become two seconds.

For video files to be useful to end users they have to start to play almost instantly. This is usually achieved by:

  • Locating a copy in close network proximity to the end user
    • CDNs use many distributed sites around the ‘edge of the Cloud’ to ensure that there is at least one site close to an end user preloaded with files that are expected to be required
  • Reducing the size of the video through transcoding and compression
  • Streaming – starting to play before all of the content is received

The increasing use of mobile devices, with narrow and unstable bandwidth connections and different format requirements, creates further hurdles to serving users rapidly.

Enterprise needs

So what about the enterprise or corporate user? Trained by the web, he or she expects to click on a link and have a video start playing within two seconds. But most internal ECM systems (e.g. for document management) are designed to download a complete file before it is available to the end user.

A story – here’s a scenario I experienced recently. A Finance department prepared a new expense form. To show staff how to use it, they prepared a five-minute video. The trouble was that their WMV-format video was over 300MB. For most staff in a global company, especially remote staff, downloading a 300MB file to view it is just not practical. What Finance needed was to be able to upload the video and have the system take care of making a rendition that was transcoded and compressed, made stream-able and hosted on a CDN.

There are just too many manual steps and too many options for most newcomers to video creation. Systems should take care of most of those steps. And one excellent way to execute several steps is to have the ECM system create a rendition of a deposited video that contains embed code to start a player and stream the video from a CDN. Consumer users can then simply click on the object name in their ECM system and a streamed video starts to play almost instantly – as they have come to expect with sites such as YouTube.

So renditions have a place in the new enterprise again to deal with bandwidth limitations! A minimal sketch of such a pipeline follows.
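
To be clear, this sketch is an illustration only: every function, format choice and URL in it is a hypothetical stand-in, not a description of any actual product. A real system would invoke a proper transcoder and a CDN provider’s upload API.

```python
# Hypothetical rendition pipeline for a deposited video: transcode and
# compress, push to a CDN, and store embed code as the rendition object.
# All functions and the URL scheme below are invented for illustration.

def transcode(source_path: str, target_format: str = "mp4") -> str:
    """Compress and re-encode the uploaded file; returns the rendition path."""
    rendition_path = source_path.rsplit(".", 1)[0] + "." + target_format
    # ... a real system would invoke a transcoder here ...
    return rendition_path

def push_to_cdn(path: str) -> str:
    """Upload the rendition to a CDN; returns the streaming URL."""
    return "https://cdn.example.com/stream/" + path.split("/")[-1]

def make_video_rendition(source_path: str) -> str:
    """Return the embed code stored as the ECM rendition object."""
    stream_url = push_to_cdn(transcode(source_path))
    return f'<video src="{stream_url}" controls width="640"></video>'

# Finance uploads expense-form.wmv; users click the rendition and stream it.
print(make_video_rendition("/uploads/expense-form.wmv"))
```

Syndicated at http://conversations.opentext.com/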

Really looking forward to Virtual Content World – other ways to be ‘virtual’

You’ve probably heard about the first Open Text Virtual Content World (www.opentext.com/virtualcw) on Tuesday 19 January 2010. Hopefully you can attend. I’ll certainly be there in a virtual sense. It’s not too late to register, and if you attended Content World 2008 you’ll have received a promotional code for free registration.

For those who can’t attend, there is an even ‘more virtual’ and, dare I say, free option – watch the many postings on Twitter and Facebook.

The twitter hashtag is #otvcw.

The volume of tweets will really pick up on Tuesday if the Content World 2009 experience is any guide – not just from the ‘official’ event twitter account (@OTContentWorld) but of course from other OT staff like me and, most importantly, customers.

It should be a great event. There has certainly been a lot of organizational activity. Colleagues have told me this virtual event has been as much work as an in-person one.

‘See’ you there!

Twitter: @MartinSS

Syndicated at http://conversations.opentext.com/

Some Thoughts on Effective Enterprise Taxonomies for Content

While discussions of content management usually refer to the needs of enterprises (i.e. Enterprise Content Management, ECM), in reality most ECM deployments start at the departmental level. This is not bad – on the contrary, departmental deployments typically address specific business needs, which sharpens their focus and improves their chances of success. However, over time, as an enterprise begins to deploy ECM technologies in many departments, the benefits of an enterprise strategy to support cross-enterprise deployment become apparent.