Tag: Architecture

Cloud Computing Defined

Welcome to the first installment of what will be an ongoing series on Cloud Computing.  Everyone in the industry is talking about it and the media is awash with hype.  I’ll be taking a different approach, trying to bring some clarity and reason to help you make informed decisions in this area.

The place to start is with a definition of the term.  A wide variety of sources attempt to define Cloud Computing, many with subtly different nuances, and often including benefits (e.g. flexibility, agility) that are potential outcomes of going Cloud but certainly don’t belong as part of its definition.

I prefer the U.S. Department of Commerce National Institute of Standards and Technology (NIST) draft definition, which is well-considered, well-written and carries the weight of a standards body behind it.  Their definition sets out five essential characteristics:

  1. On-demand self-service: A service consumer can add or delete computing resources without requiring anyone working for the service provider to take action to enable it.
  2. Broad network access: Note that the NIST definition does not specify that the network is the Internet (as some other definitions do).  This is necessary to allow for private clouds.  NIST goes on to say that cloud services use standard mechanisms to promote use by a wide variety of client devices.  It’s not clear to me that this should be a full-fledged requirement, but it is certainly in keeping with the spirit of the cloud concept.  One could imagine a cloud that uses custom protocols agreed to by a closed group of consumers, but perhaps the word “standard” still applies in that it would be a standard across the consumer group.
  3. Resource pooling: Also known as multi-tenancy, this characteristic requires that the cloud serve multiple consumers, and that the resources be dynamically assigned in response to changes in demand.  The definition goes on to say that there is also a sense of location independence in that the consumer has no control over the location where the computing takes place.  It is important to distinguish “control over” from “knowledge of”.  The consumer may well know which specific data centre the resources are running in, particularly in the case of a private cloud.  There may also be limitations for compliance or security purposes on where the resources can be drawn from.  The important point is that the consumer cannot pick and choose between resources of a particular class; they are assigned interchangeable resources by the provider, wherever they happen to reside, within the limits of the service agreement.
  4. Rapid elasticity: Capabilities need to be rapidly provisioned when demanded.  The definition does not specify how rapidly, but the intent is that it be a matter of minutes at most.  The service must be able to scale up and back down in response to changes in demand, at a rate that allows potentially unpredictable demands to be satisfied in real time.  Ideally the scaling is automatic in response to demand changes, but it need not be (a minimal sketch of such a scaling loop follows this list).  The definition then goes on to say that the resources often appear to be infinite to the consumer and can be provisioned in any quantity at any time.  This is of course not a rigid requirement: a cloud service could put an upper bound on the resources a particular consumer could scale to, and all clouds ultimately have a fixed capacity, so this clearly falls in the “grand illusion” category.
  5. Measured service: The NIST definition specifies that cloud systems monitor and automatically optimize the utilization of resources.  The definition does not specify the units of measurement, and in fact Amazon’s and Google’s cloud services meter and charge using very different models (in brief, Amazon charges in terms of infrastructure resources and Google by page hits).  What is surprising is that the definition does not state that the consumer is charged in proportion to usage, which many definitions consider the most fundamental tenet of cloud computing.  The NIST definition allows a situation, for example, where several consumers (say, members of a trade organization) decide to fund and build a computing facility meeting the five requirements and share its use, but don’t charge back based on usage even though it would be possible.
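To make the elasticity and metering ideas concrete, here is a minimal sketch of the kind of control loop a provider might run behind the scenes: it grows or shrinks a consumer’s pool of instances in response to measured utilization, enforces a per-consumer cap of the sort just described, and meters usage as it goes.  The AutoScaler class, its thresholds and the simulated demand samples are all hypothetical illustrations, not any vendor’s actual API.

```python
# Hypothetical sketch of "rapid elasticity" plus "measured service".
# The AutoScaler class and its thresholds are invented for illustration;
# real providers expose very different (and much richer) APIs.

class AutoScaler:
    def __init__(self, min_instances=1, max_instances=20):
        self.min_instances = min_instances   # floor kept for availability
        self.max_instances = max_instances   # per-consumer cap: the "grand illusion" limit
        self.instances = min_instances
        self.instance_hours = 0.0            # metered usage, the basis for reporting/billing

    def reconcile(self, avg_utilization, interval_hours=0.25):
        """Scale toward demand: add capacity when hot, release it when idle."""
        if avg_utilization > 0.80 and self.instances < self.max_instances:
            self.instances += 1              # provisioned in minutes, with no human action
        elif avg_utilization < 0.30 and self.instances > self.min_instances:
            self.instances -= 1              # returned to the shared pool for other tenants
        self.instance_hours += self.instances * interval_hours
        return self.instances

scaler = AutoScaler()
for load in [0.55, 0.90, 0.95, 0.85, 0.40, 0.20, 0.15]:  # simulated demand samples
    print(f"utilization {load:.2f} -> {scaler.reconcile(load)} instances")
print(f"metered usage: {scaler.instance_hours} instance-hours")
```

A production autoscaler would react to richer signals (queue depths, response times) and provision real machines, but the shape of the loop, and the metering that rides along with it, is the same.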

There’s a lot to like about the NIST definition and it is the one I’ll be using in subsequent articles.  We’ll be digging into what people and organizations are actually doing with cloud computing (without all the hype and hyperbole), and practical considerations for success from both the business and technical viewpoints.

Larry Simon is an IT strategist who advises startups through to Fortune 500 corporations and government institutions on achieving maximum return from their information technology investments through sound governance and practices.

Should Business Strategy be Influenced by Technological Considerations?

Can business strategy be created in isolation from technology considerations?  There is a widespread belief in the business community, and in my experience it is the common perception among organizations, that business strategy is defined first and technology then follows to enable it.

Strategy Development Process:

In order to explore the role technology plays in shaping and supporting the business, let’s look at how strategies are developed.  A significant amount of research has been done and published on the subject; here are some relevant highlights.

There are two main dimensions to strategy development.

  1. Visionary thinking: intuition, a sense of direction, and the ability to make bold predictions and define goals.
  2. Scientific analysis: considering options and making recommendations based on the analysis, followed by implementation.
    • Strategic analysis is guided by a scientific approach to understanding your markets, competitors, value chain and the bargaining power of the key stakeholders.  It also entails understanding the strengths and weaknesses of your organization against the opportunities and threats that the external environment presents.
    • Strategy formulation is guided by the analytical findings and by alignment to the vision and overall goals of the organization, resulting in a strategic road-map.
    • Strategy implementation is converting the strategy into real results.

Strategy development is the focus of this article, and specifically strategic analysis, which then guides strategy formulation and implementation.

Is there a place for technological considerations in strategic analysis?  The answer becomes apparent in the examples that follow.

Technological Influences on the Business Landscape

Examples of technologies that have had a transformative impact on business value chains and have redefined markets and distribution channels are all around us.

The globalization phenomenon enabled by the Internet is one of the most profound.  The Internet has affected all the traditional dimensions of business strategy (reduced barriers to entry, a global market no longer limited by geography, increased competition, etc.).

The financial services industry is a prime example of an industry where technology has transformed the value chain, redefined competitive forces and given consumers a tremendous amount of bargaining power.  Entry barriers have declined and new competitors have emerged.  Some financial products and services have become more transparent and commoditized, making the market more competitive.  The Internet as a new service delivery channel (reduced channel costs, 24/7 availability) has put pressure on the more traditional branch-based channels, and the resulting service delivery cost structure has changed.  ING operates on the model that bricks and mortar are not required to sell its banking products and services.

The healthcare value chain has likewise been transformed by technological advances: linking healthcare records through electronic information exchange and moving diagnostic imaging from traditional film to digital have redefined the value chain, changed the balance of power between suppliers and buyers, and altered the very nature of the products and services being delivered.

The retail industry is another example where technology has changed the business landscape.  Amazon’s strategic business model was defined entirely by technology.

Relationship between Business and Technology

Given how profoundly technology has influenced our business and personal lives, it is hard to fathom how a successful business strategy can be defined without considering technological influences and enablers.  By creating a partnership between Business and Technology at the strategy development stage, you create a strategy that is well formed and can maximize business value and competitive positioning by embedding technological considerations from the very start (and not as an afterthought!).

So why is there a significant divide between Business and Technology?  In subsequent articles I will focus on why this barrier (real or perceived) exists.

If you have examples to demonstrate the benefits of business/technology partnerships, please share your thoughts on this forum.

Job One in the upgrade of a major ECM system

“Upgrade it and they will run away!” is a risk scenario with any major upgrade of a business-critical, enterprise system, including an enterprise content management (ECM) system.

Often the people promoting an upgrade are technologists, who are almost always ‘early adopters’.  Many staff, however, just want to get their jobs done and will often be confused by, resent or even resist changes – telling typical users that they will get a whole bunch of ‘cool, new features’ isn’t likely to make them enthusiasts.

Here’s a typical persona of such a user:

  • Doesn’t read corporate communications (newsletters, emails, etc.)
  • Doesn’t like technology
  • Couldn’t care less about the product or site provided it ‘works’
  • Just wants to ‘do their job’ without external disruption

One of the big challenges is to ensure that when such a user comes to work on the Monday after a major upgrade, they don’t say, “What the *#% happened to the site?”, especially when the interface has changed.

I’m struggling with these issues in advance of a major ECM system upgrade. The system is called Ollie and has been in production for 15 years. It now has over 5.5 million objects and 4,000 users – 93% of whom use the system every month. It’s actually the main internal Enterprise Library of Open Text and is pretty much an un-customized version of the product we sell now called Content Server.

  • Content Server version 10 is just about to be released. It is the latest iteration of a product first called Livelink, and provides the underlying shared services of the Open Text ECM Suite.

Without doubt the newer version provides a better, more modern interface that most users will prefer – once they learn what’s different and how to use it.  I’m confident of this because the new interface has undergone extensive usability testing, but I also know that you can’t please all of the people all of the time, and that most people don’t like surprises at work.

So ‘job one’ is to create a short, effective video that overcomes the shock of the unexpected, since no matter how good our communications strategy is, many people will be surprised. The video also has to smooth the way for further change, because while some of the benefits of the new version will be available on Day One, others depend on subsequent work by knowledge managers using new capabilities that become available after the upgrade.

The Myth of Real-time Collaborative Authoring

In the document management field there has been a succession of products designed to support users working on a document at the same time, even if they are in different locations. These products have failed. They have failed because people don’t work on documents together very often.

I wonder where the belief in concurrent creation of documents came from. In the physical world you seldom see people saying, “Come to my office and we’ll write a document together,” so why expect users to want to do it virtually?

Documents may well be created to summarize a brainstorming session or record the minutes of a general meeting, but the designated author usually ‘goes away’ to somewhere quiet to write the first draft.

Even in the review phase, reviewers independently make comments, suggestions and edits at different times. The author then pulls these together to make a revised version. Email is no different, especially since emails of any length are essentially documents.

Sure, the stepwise, asynchronous approach to content authoring and review takes place over a longer period, but it makes the best use of each participant’s time and is therefore more efficient overall.

I started to think about this again with yesterday’s announcement that Google Wave will not be further developed (http://googleblog.blogspot.com/2010/08/update-on-google-wave.html).  As the blog post says, Google Wave was “…a web app for real time communication and collaboration”.  For the purposes of this discussion, let’s consider collaboration and communication independently.

Collaboration in Authoring

A technical tour-de-force, Wave enabled users to see others changing content as they themselves changed it.  Very cool, but actually disconcerting.  I wouldn’t have wanted you to watch me author this blog post, for several reasons:

  • I’m easily distracted and need to concentrate to develop some cohesive thoughts
  • While writing I jump around adding sections, changing others, moving text blocks – it would be hard to follow and I’d have to explain what I was doing which would further slow me down and distract me
  • I’m the world’s worst typist

I’m probably no different than most people, at least regarding the first two points. And perhaps more lethal to the concept of concurrent authoring:

  • You’d get bored – it takes far longer to author a document than read it, and you’d probably want to be doing something else while I work, preferring to comment on my finished work

And that’s the crux of the matter – most people are busy, with many demands on their time, and collaborative authoring is just too inefficient.

Communication Delays are Good

While Wave was designed for collaboration, it was also intended for communication (see the quotation above).  Essentially email and instant messaging rolled into one.  But I think there is a problem there too – most people actually don’t want to use real-time communication!

Many commentators have remarked on the tendency for young people to use their mobile phones for text messaging far more than as telephones. You’d think it would be easier to engage in a conversation by talk rather than typing, so why is texting preferred?

I think people prefer texting because it allows them to be engaged in many, independent conversations with different people. For this to work they need to be able to send and receive messages in real time, but also need an agreed expectation that replies may take several minutes. Awkward silences of several minutes on a phone aren’t agreeable, and since voice isn’t cached locally like a text message you have to listen to each voice channel concurrently – which isn’t practical.

Interestingly, while they are short, both mobile text messages and instant messages (IM) are generally only sent when they are complete.  It is usually enough to see that the recipient is typing (as with instant messaging) or to just assume that they are (as with texting).  Stumbles, pauses and corrections are not sent – but they were with Google Wave.

Summary

With small pieces of content: true real-time communication is often undesirable, with near real-time being better.

With larger pieces of content: collaborative authoring is best done asynchronously.

Collaborative authoring seems to be something that IT professionals believe will lead to greater efficiencies, while end users don’t have the time for it!

Google’s impact on enterprise content management

Without a doubt, Google has had a huge impact on how enterprises view content management (ECM).

The pluses and minuses were highlighted by two recent blog posts:

On the plus side, John Mancini of AIIM listed three “fundamental assumptions about information management that affect the ECM industry” in his “Googlization of Content” post:

  1. Ease of use. The simple search box has become the central metaphor for how difficult we think it ought to be to find information, regardless of whether we are in the consumer world or behind the firewall. This has changed the expectations of how we expect ECM solutions to work and how difficult they are to learn.
  2. Most everything they do is free…
  3. They have changed how we think about the “cloud.” Google has changed the nature of how we think about applications and how we think about where we store the information created by those applications. Sure, there are all sorts of security and retention and reliability issues to consider…”

On the negative side, Alan Pelz-Sharpe posted a piece in CMS Watch titled “Google – unsuitable for the enterprise”.  Alan introduced his piece by saying:

“For years now Google has played fast and loose with information confidentiality and privacy issues.  As if further proof were needed, the PR disaster that is Buzz should be enough to firmly conclude that Google is not suitable for enterprise use-cases.”  He went on to say, “It is inconceivable that enterprise-focused vendors… would ever contemplate the reckless move that Google undertook in deliberately exposing customers’ private information to all and sundry with Buzz.”

Google is a hugely successful company, and they are extremely profitable.  However, they are not fundamentally a software company; they are an advertising placement company, and everything they do is motivated by maximizing advertising revenue, whether directly or indirectly.  Some 99% of their revenue comes from advertising, which pays for every cool project they do and every service they offer.

While Google services to consumers have no monetary charge, they are not free:

  • You agree to accept the presentation of advertisements when you use Google products and services; most people believe these to be easily ignored despite the evidence of their effectiveness.
  • More importantly, you agree to provide information about your interests, friends, browsing and search habits as payment-in-kind.  Most people sort of know this, but don’t think about it.  If you ask them whether they are concerned that Google has a record of every search they have ever performed, they start to get uncomfortable.  I expect most of us have searched on terms which, taken out of context, would take a lot to ‘explain’.

While most consumers in democracies are currently cavalier about issues of their own privacy, enterprises most certainly are not. Indeed, the need for careful management of intellectual property, agreements, revenue analyses and a host of other enterprise activities captured in content is precisely why they buy ECM systems.

The furor over Buzz points out that Google did things first and foremost to further its own corporate goals, which clash with those of other enterprises.

In contrast, Google’s goals require it to align with user needs, especially for good interfaces.  An easy-to-use interface encourages and sustains use.  That ought to be obvious to everyone, but when the effects of the interface on usage are easily measurable and directly tied to revenue (as in the case of Google Search), it becomes blatantly and immediately evident.  By comparison, the development of an interface for an enterprise software product may take place months or even years before the product is released.  Even if detailed usability research is done with test users, and in-depth beta programs are employed, the quality and immediacy of the feedback are lower.

Besides easy interfaces, enterprise content management users expect ‘Google-like’ search, and are disappointed. There are generally two reasons for this:

  • Search results have to be further processed to determine whether each ‘hit’ can be presented to the user, based on their permissions (see the sketch after this list)
    • Typically 70-90% of the total computational time for enterprise search is taken up by permission checking
  • Enterprises don’t invest as much in search infrastructure as they would if the rapid delivery of search results were seen as critical
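To illustrate the first point, here is a minimal sketch of permission trimming: a cheap ranking pass followed by a per-hit permission check.  The index, the ACL store and the function names are invented for this example; real ECM search engines implement the same idea internally, usually with group expansion and permission inheritance that make the check far more expensive.

```python
# Hypothetical sketch of permission trimming in enterprise search.
# The in-memory index and ACL dictionaries are invented for illustration.

def raw_search(index, query):
    """The cheap part: rank documents by naive term overlap."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(text.lower().split())), doc_id)
              for doc_id, text in index.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

def can_read(acls, user, doc_id):
    """The expensive part: one permission lookup per candidate hit.
    In a real system this involves group expansion and inheritance,
    which is why trimming can consume 70-90% of total search time."""
    return user in acls.get(doc_id, set())

def search(index, acls, user, query, limit=10):
    results = []
    for doc_id in raw_search(index, query):
        if can_read(acls, user, doc_id):   # trim hits the user may not see
            results.append(doc_id)
            if len(results) == limit:
                break
    return results

index = {"d1": "quarterly revenue analysis", "d2": "revenue forecast draft"}
acls = {"d1": {"alice"}, "d2": {"alice", "bob"}}
print(search(index, acls, "bob", "revenue"))   # ['d2'] -- d1 is trimmed away
```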

The second point is probably more important than people admit.  In my experience, IT departments do not allocate significant computational resources to search; I suspect they look at average resource utilization rather than peak performance and the time to deliver results to users.  To deliver the half-second-or-less response that Google considers essential, hundreds of servers may be involved.  I am not aware of any enterprise that allocates even the same order of magnitude of resources to content searching, so inevitably users experience dramatically slower response times.

In summary, the alignment of optimal user experience with Google’s need to place advertisements has advanced the standard of user interfaces and provided many ‘free’ services, but the clash between Google’s corporate goals and those of other corporations has shown that enterprise content has a value that enterprises are not likely to trade away.

Syndicated at http://conversations.opentext.com/

The biggest changes sneak up on you

Enterprise content management (ECM) systems can track everything that a user does.  Usually this capability is seen in the context of compliance – you can answer the ‘who did what’ and ‘when did they do it’ questions.  You can also track changes in what users do over time.  And so it was that a colleague, simply by reporting on how many documents I deposit, was able to track how my behaviour has been changing without my noticing it.

  • A bit of background: I use a number of Open Text Content Server systems.  One of these, nicknamed Ollie, is used to support content-centric business processes within Open Text.  I have authored many documents, mostly in MS Office formats, over the course of my nine years with the company.

The ‘aha’ moment: So when my colleague made a social networking post that he had found I had deposited almost 700 documents in Ollie, I wasn’t surprised.  I was surprised, though, when he pointed out that I hadn’t added any documents in the last month!  Zero!  None!

This of course got me thinking: “What had I been doing?”  He asked if I’d mostly moved to social networking-style tools.  But no, I’ve been using collaborative tools of one form or another fairly consistently, and indeed heavily, over the last decade.  What I realized was that I have almost entirely shifted to using wikis in place of documents.

  • A bit more explanation is in order.  Content Server (formerly Livelink) is a full-featured ECM system.  You can add documents of any type, including of course MS Office files.  You can also author directly in wikis.  On the collaborative/social networking side you can post to a range of collaborative tools such as forums, discussions, news channels, blogs, etc., and, with more recent additions, instant messages, status posts, etc.  So users have a range of content and social tools in the same system and can select whatever feels most suited to the business task at hand.  Given these choices, you can track user preference changes over time by analyzing audited events, as the sketch below illustrates.
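Here is a minimal sketch of the kind of audit-trail analysis my colleague might have run.  The event tuples, user name and field names are hypothetical, and Content Server’s actual audit schema is different, but the aggregation idea is the same.

```python
# Hypothetical sketch of mining ECM audit events for behaviour trends.
# The event records below are invented stand-ins for a real audit trail.

from collections import Counter
from datetime import date

events = [  # (user, action, object_type, when)
    ("jdoe", "Create", "Document", date(2010, 5, 14)),
    ("jdoe", "Create", "WikiPage", date(2010, 6, 2)),
    ("jdoe", "Create", "WikiPage", date(2010, 6, 9)),
    ("jdoe", "Create", "WikiPage", date(2010, 7, 1)),
]

def monthly_counts(events, user, object_type):
    """Count one user's creations of one object type, month by month."""
    return Counter(when.strftime("%Y-%m")
                   for who, action, otype, when in events
                   if who == user and action == "Create" and otype == object_type)

print(monthly_counts(events, "jdoe", "Document"))   # documents tail off...
print(monthly_counts(events, "jdoe", "WikiPage"))   # ...as wiki pages take over
```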

On further reflection it shouldn’t have been surprising.  Once I used to have the Word and PowerPoint applications open all the time, and I would typically send documents to colleagues as email attachments or via links to copies in Ollie.  Now I create wiki pages and then rely on automated notifications and RSS for others to learn about them, and of course push awareness by targeted emails.  I very seldom open Word to author content, and when I do I get frustrated because all of the embedded code makes it hard for me to reuse the content (unless I force Word into blog posting mode, as I’m doing now).

That’s another thing – I repurpose content to multiple channels much more than I used to.  I don’t simply author ‘free-standing’ documents and then deposit and email them; I often use the same content in several blogs and/or wikis.  And now I’m starting to create short videos where once I’d have authored a document…

None of this is surprising in an abstract sense.  Pundits have been saying that there are huge changes underway, and as someone who works in a company at the forefront of how content is managed in organizations, I’ve been aware of it and promoted it.  I just hadn’t realized how much my own behaviour had changed; otherwise I wouldn’t have been surprised that I didn’t deposit a single document in Ollie last month!

Syndicated at http://conversations.opentext.com/