Monday 21 December 2009

The vote on the Open Group Service-Oriented Cloud Computing Infrastructure Project

I see SOA as a key enabler for internal and external services operating in cloud infrastructures, particularly in the evolution of SOA design-time artifacts (service contract, format protocols, portfolio) into run-time service artifacts (API, consumer/producer platform). Three key specifications I can think of that these run-time artifacts should include are: an elastic environment specification, a specification for security interoperability, and a specification for dynamic binding and discovery.

I have found in SOA projects that many were locked into a design-time specification that was passed to developers, who then figured out how to design and build the solution. Much of the web service or API design was either driven by a specific BPM or portlet style, or more generally by a wrapper and service management focus. The production environment and network connectivity have typically been outside the span of a SOA project, leaving non-functionals and production build and deployment to existing infrastructure or new hardware investment sized for a broad availability and utilization target. With active management and transaction-level performance management in service management tools it is now possible to monitor and optimize individual web service calls and to fine-tune network packets and database performance. This means that the goals of service-oriented performance and QoS can potentially be modelled and delivered on a transaction-by-transaction basis.
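
As a minimal sketch of the idea (my own illustration, not any particular service management product), each web service call can be wrapped so that its latency is recorded call by call, giving the per-transaction data that QoS tuning needs:

```python
import functools
import time

call_log = []   # stand-in for a service management metrics sink

def monitored(service_name):
    """Record the latency of every call to the wrapped service operation."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                call_log.append((service_name, elapsed_ms))
        return wrapper
    return decorator

@monitored("get_customer")          # hypothetical service operation
def get_customer(customer_id):
    time.sleep(0.02)                # simulated downstream latency
    return {"id": customer_id}

get_customer(42)
print(call_log)                     # e.g. [('get_customer', 20.4)]
```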

The evolution of cloud-based assets means the "up and down" scaling of application services can potentially reflect real-time use of the IT services. The integration of SOA concepts with cloud is a critical area to elaborate, particularly given that the reality of the cloud, especially in IaaS and PaaS, is here already. An interesting area is whether cloud vendors adopt RDF or their own ontology to describe the metalanguage of the cloud infrastructure, or use more specific hypervisor- or API-oriented connection specifications.

One area I am hoping the SOA Cloud project will help with is understanding how to design applications in an SOA style that can best use a cloud infrastructure environment. My understanding, for example from work I conducted with VMware last year on vCloud and vApps, is that the direction is towards "infrastructure aware applications": application functionality is further deconstructed into types of logic and payload services that are virtualized and "call" cloud infrastructure resources as and when they need them. Multiplicity is a new concept that enables multiple SOA-style services to run simultaneously, processing multiple services and scenarios through a distributed cloud infrastructure.
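
A sketch of what an "infrastructure aware application" might look like (my own illustration; `provision` and `release` are hypothetical calls, not the vCloud API): the application asks the cloud for capacity only when a heavy step runs, and hands it back straight afterwards.

```python
class ElasticPool:
    """Stand-in for a cloud provisioning service."""
    def provision(self, vcpus):
        print(f"provisioning {vcpus} vCPUs")   # would be a cloud API call
        return f"lease-{vcpus}"

    def release(self, lease):
        print(f"releasing {lease}")

def heavy_step(pool, payload):
    lease = pool.provision(vcpus=8)            # call infrastructure on demand
    try:
        return sum(payload)                    # stand-in for the real workload
    finally:
        pool.release(lease)                    # give the capacity back

print(heavy_step(ElasticPool(), range(1_000_000)))
```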

A great example of this I have seen is Intel Research work on mobile cellphone technology that moves high-processing workloads onto an external cloud service provider and returns the results to the mobile device. This in effect "virtualizes the CPU and memory power" of the device to include the external cloud services. Suddenly the cloud makes it possible to bring new services and power to a multitude of different devices.

I think the SOCC Infrastructure project will pull existing SOA artifacts and cloud together, and I hope it will help define the extensions of SOA to embrace the powerful "infrastructure services" enabled via cloud. While cloud infrastructure and interoperability specifications are still evolving, I think the design of a "cloud contract" will be much enhanced by this project.

Saturday 1 August 2009

The 6 M’s of Cloud oriented services

Looking at aspects of cloud computing has brought many different operating characteristics under the spotlight. What architects refer to as the "ilities" of services, marketers call the "messaging"; the resulting service levels and blurring of marketecture have caused some greyness around how to define operating features for a cloud environment.

Differentiation of the service providers is evolving, together with new technology features starting to appear under cloud-oriented portfolios. Two or three camps are emerging between services to enable clouds (typically other providers' cloud platforms) and providers of cloud services and platforms. An ongoing debate is whether there is a third segment of the market that involves brokering and aggregating cloud services, a term also seen as orchestration in this space.

A fundamental question is how to offer cloud services that capture the cost benefits of IT operations but can also shape and build the business services that businesses want to drive. I saw a great phrase recently about IT services citing "too much green field thinking in projects" as a cause of difficulty in IT service lifecycle management. Often the incumbent brown field operation and IT estate will not simply vanish, and the project implementing new or enhanced systems and solutions acts separately from the deployment environment view. This is a great truism of IT: many aspects of service need to bridge between what is being built and run in IT and how the business uses it and might want to change rapidly or strategically to build new business capabilities. Cloud computing, if nothing else, commoditizes aspects of the hardware and software and starts to enable business-service-centric design and consumption patterns based on business service levels and business-level consumption.

I have used the term cloud services as if it were a defined term of the industry when in fact it is still evolving. To pick one visible development in the US, NIST defines a design and deployment taxonomy that includes the terms IaaS, PaaS and SaaS, and the delivery models described as public, community, private and hybrid clouds (http://csrc.nist.gov/groups/SNS/cloud-computing/index.html). But these definitions are architectural and don't fully describe how services operate across these tiers of technology, or the virtual or physical placement of the hosting of these services. In short, how these cloud services are seen from the perspective of business services to business.

I think these are still being characterized by the design of boundary management between the APIs, the platforms and the participants involved (internal, external or a mixture of communities). SOA defined an IT-centric state of services and took a path towards IT services enablement. The service contracts defined there are now potentially being broadened out into cloud contracts that take on board these operating characteristics.

So what would be the features in such a cloud contract?

I see at least six component characteristics which push the thinking of business services through the use of cloud, in what I term the 6 M's of cloud-oriented services: Multi-tasking, Multiplexing, Multiplicity, Multi-tenancy, Multi-casting and Multi-key.

· Multi-tasking

o The term multi-tasking is used here to mean repurposing IT assets and functions to whatever service is required at the time of use. Virtualization enables logical provisioning of IT services, allowing the same assets to serve different purposes.

· Multiplexing

o This is the balancing of workloads and performance based on the actual usage of the service and not the forecast (termed statistical multiplexing in the University of California Berkeley paper Above the Clouds, Feb 2009, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.html)

· Multiplicity

o This is an interesting feature that extends a piece of work into more than one instance of that work. To put it another way, it is possible to simultaneously run many workloads depending on the service needs. Selected tasks or complex processes can be moved to the cloud. This promotes the idea of not just performing the specific task in front of you but also running alternative task scenarios. This is a new way of thinking made possible by cloning instances and services in an on-demand elastic capacity environment (see the sketch after this list).

· Multi-tenancy

o The tenancy of a service can be dedicated or part of a shared community environment. Multi-tenancy extends the definition of tenancy so that one location can serve and represent many tenants. The central idea is efficiency of specification, with variations to support business services for many users.

· Multi-casting

o This feature is to some extent multi-tenancy at the network level, and concerns how services can be delivered efficiently from an enterprise perspective. Multi-casting is a network capability to deliver information to a group of nodes simultaneously in an efficient way, sending messages once and copying them to multiple destinations. Used in streaming media and other IP multicast routing, this aspect of services needs to consider the boundary of cloud service delivery at the network, which may involve 3rd party networks and cloud platforms.

· Multi-Keys

o In enterprise-level services the need to address large groups of security policies and user groups means considering the complexity of identity and authentication services. Public and private key encryption has aspects that need administration in the context of cloud services.
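
As promised above, here is a minimal multiplicity sketch (my own illustration with assumed names): one piece of work is cloned across elastic capacity, here simulated by worker processes, so that several alternative scenarios run simultaneously instead of one after another.

```python
from concurrent.futures import ProcessPoolExecutor

def price_scenario(discount):
    """Stand-in workload: evaluate one alternative pricing scenario."""
    base_revenue = 1_000_000
    uplift = 1 + 4 * discount ** 0.5            # toy demand response model
    return discount, base_revenue * (1 - discount) * uplift

if __name__ == "__main__":
    scenarios = [0.0, 0.05, 0.10, 0.20]         # the alternative "what ifs"
    with ProcessPoolExecutor() as pool:          # stands in for cloned instances
        results = list(pool.map(price_scenario, scenarios))
    best_discount, best_revenue = max(results, key=lambda r: r[1])
    print(f"best discount: {best_discount:.0%} -> {best_revenue:,.0f}")
```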

Friday 26 June 2009

The need for Multiplicity and maintaining encryption persistence – a mathematical barrier broken, but not just yet

Interesting blog entry on the CCIF around IBM Solves Cryptographic Cloud Security.

http://www.elasticvapor.com/2009/06/ibm-solves-cryptographic-cloud-security.html

Reading further, this discusses the topic of privacy homomorphism, which addresses one of the core issues of ensuring the privacy of information stored in a cloud environment: how to maintain data encryption during processing and storage in the cloud.

Much cloud debate has been on the subject of securing access and transport of data into the cloud and then securing the information while it is held in the cloud environment. An issue often raised is that the data is decrypted at the point of use, creating a potential protection-point problem where confidentiality could be compromised.

While secure virtual machine containers and VPN tunnels can address the isolation of the cloud service as data is transported to the cloud environment, the basic problem is maintaining the encryption state of the data when it has to be used and is therefore open to view.

Alternative models of data and code obfuscation software promise a way of using the data, but these have observed limitations in many business scenarios that require strong encryption of sensitive data. A secondary issue is the additional layer of complexity that this approach can add to processing time and debugging.

The article indicates that IBM researcher Craig Gentry has devised a mathematical method that enables computation on encrypted data directly. This is a perfect scenario and a milestone in secure computation, but as flagged in various responses to the article in Forbes Magazine, it shares a challenge with obfuscation algorithms: the overhead of processing. http://www.forbes.com/forbes/2009/0713/breakthroughs-privacy-super-secret-encryption.html
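
To make the idea concrete, here is a toy illustration of computing on encrypted data, using the multiplicative homomorphic property of textbook RSA. It is deliberately tiny and insecure, and it is not Gentry's fully homomorphic scheme (which supports arbitrary computation, not just multiplication):

```python
p, q = 61, 53                  # toy primes; real keys are vastly larger
n = p * q
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % n

# ...and the decrypted result is the product of the plaintexts.
assert decrypt(c_product) == (m1 * m2) % n
print(decrypt(c_product))      # 42
```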

Multiplicity – building the capabilities of utility cloud services.

This suggests more than ever the need to build multiplicity into cloud services, enabling partial or complete movement of workload processing into a cloud. This needs a step change in the way IT processes are designed into the architecture and delivered as a service. Increasingly complex processes, such as language translation or security in the encryption example above, increase the in-memory and computation workload.

Cloud is an evolving area which I see moving beyond the current utility services, which are now taking hold with strong and robust offerings in storage, compute and software as a service becoming a reality.

A multiplicity strategy enables isolated processes to be moved to the cloud, which will need service providers and cloud vendors to consider how to build added-value services that better leverage infrastructure resources on demand. The security encryption puzzle is just another barrier which shows that, with ingenuity and innovation, the walls of new technology adoption can be overcome.

Friday 29 May 2009

Cloud Multiplicity - the multi-cloud

Reading a great book by George Reese, Cloud Application Architectures (O'Reilly, 2009). Clearly a guy with real experience of the cloud as the founder of Valtira and latterly enStratus, a competitor to RightScale (http://www.enstratus.com/page/1/blog.jsp).

What's an eye opener is his perspective on the reliability of virtual instances versus physical instances. He is quite direct in his statement that virtual instances are much less reliable than physical instances. While it's an AWS perspective, he states that EC2 instances are much less reliable and that designing for failure is a critical step in cloud design. He advocates, as I do, a strong separation of presentation, business modeling, business logic and data as per the MVC paradigm, and is realistic in his focus on cluster technology in application and database server scaling design for cloud elasticity benefits.
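
As a minimal sketch of what "design for failure" can mean in practice (my own illustration with assumed endpoints, not an example from the book): treat every virtual instance as disposable and retry against a pool of interchangeable clones.

```python
import random
import time

ENDPOINTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # hypothetical replicas

def send(endpoint, request):
    """Stub transport that fails randomly to simulate unreliable instances."""
    if random.random() < 0.3:
        raise ConnectionError(f"{endpoint} unavailable")
    return f"{endpoint} handled {request}"

def call_with_failover(request, attempts=4, backoff=0.2):
    last_error = None
    for attempt in range(attempts):
        endpoint = random.choice(ENDPOINTS)           # spread across clones
        try:
            return send(endpoint, request)
        except ConnectionError as err:
            last_error = err
            time.sleep(backoff * 2 ** attempt)        # back off, then retry
    raise last_error

print(call_with_failover("GET /orders/42"))
```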

He also has some useful definitions of security around network intrusion and host intrusion tools, which are clearly a key enabler for cloud services; as he points out, physical network DMZs are not possible in many externalized cloud data centers. I also like his refreshing real-world view that security standards lag behind virtualization: the separation implied by distinct physical servers becomes a false assumption once servers are logically partitioned via VMs.

He provides a wake-up call on the I/O performance of storage, which is often not suitable for near-real-time or real-time applications and sits squarely in the batch temporal domain of service performance, placing cloud in the backup-as-a-service and archive space. Perhaps AWS's recent attention to accelerated network I/O services is a recognition of this, and network-based cloud services and strategic alliances are a critical strategy for cloud performance.

The book also illustrates the performance issues of CPU-intensive operations in the cloud, and the barriers that legacy investments pose to cloud adoption of IaaS (he also confirms it's cheaper to extend the existing infrastructure), but confirms the loss of future strategy caused by these barriers (as ratified in his conversations with James Urquhart, http://www.cnet.com/profile/jamesurquhart/?tag=mncol;txt). The ability to adopt different VM strategies for legacy application modernization will be a key driver of cloud integration, which I see as working at different levels of abstraction rather than just at the IaaS and PaaS tiers. Taking a classical orchestrator view of modernization and adoption will fail because it seeks to control the IT estate too much, whereas a multiple use case approach to on-demand services, underpinned by a range of evolving IaaS and PaaS platforms, will accelerate adoption.

I believe the emergence of multiplicity strategies to support statistical multiplexing patterns will drive realistic cloud adoption, including virtualization patterns in:
  • Partitioning strategies for workloads in cloud operations
  • Clone strategies for backup, replication and intelligence extensions, e.g. real-time language translation and multiple process services for parallel temporal services
  • Appliance strategies for application extensions via API
  • PaaS and IaaS integration, e.g. Google Apps and Force.com integration
  • Adoption of mainstream service management challenges with social media systems, e.g. the Twitter/Facebook "remedy channel" effect
The advocacy of using PaaS investments to support operational I/O and CPU performance will create the tipping point for developing PaaS and SaaS services. The security of public clouds does still have partitioning problems for geographic and national law compliance, but as explained in the book, the technical aspects are not impossible to replicate to bring security to the level of a private data center in many respects.

Wednesday 6 May 2009

Cloud Clone Augmented Execution


Exploiting cloud clone augmentation

A very impressive white paper has come out from Intel Research Berkeley by Byung-Gon Chun and Petros Maniatis, titled "Augmented Smartphone Applications Through Clone Cloud Execution", part of the Proceedings of the 12th Workshop on Hot Topics in Operating Systems (HotOS XII), May 2009.

http://berkeley.intel-research.net/bgchun/clonecloud-hotos09.pdf       


The paper explores a research topic: how to move workload-intensive operations from a smartphone platform into the cloud and return the results, exploiting bursting to augment the mobile services. The project title is CloneCloud.

http://berkeley.intel-research.net/bgchun/clonecloud/   


What is particularly interesting is the way a number of virtualization topics and augmented processes allow workloads to be split between the cell phone and the cloud. The key point here is the separation and spread of workloads between different local and virtual platforms, such that the computational and storage capabilities are leveraged as "one networked computer" service. Add to this augmented application services not covered in the article and you start to see a number of added-value services in a business context. This approach was evident in a recent analysis I completed of the types of cloud burst services: bursting is clearly not constrained to excess-volume or low-volume workload management, but also covers the redirection of specific types of workloads to cloud facilities.
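
A sketch of the split-workload idea (my own illustration, not CloneCloud's actual partitioning algorithm): decide per task whether to run on the handset or ship the work to a cloned image in the cloud, based on an estimated cost.

```python
CPU_BUDGET_MS = 200        # assumed limit for what the handset should run

def run_on_device(task):
    return f"device ran {task['name']}"

def run_on_clone(task):
    # Stand-in for serializing the task to the cloud clone and awaiting
    # the result over the network.
    return f"cloud clone ran {task['name']}"

def execute(task):
    if task["estimated_cpu_ms"] > CPU_BUDGET_MS:
        return run_on_clone(task)      # heavy work: speech, video indexing...
    return run_on_device(task)         # light work stays local

print(execute({"name": "ui_refresh", "estimated_cpu_ms": 5}))
print(execute({"name": "speech_to_text", "estimated_cpu_ms": 4000}))
```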


Another very interesting concept in the white paper is "multiplicity", a feature that has been in the scope of virtualization to optimize workloads, but which in the case of cloud computing services starts to create a number of very interesting possibilities hitherto considered capacity-resource constrained.

To quote the article:  


Use multiple copies of the system image executed in different ways. This can help run data parallel applications, e.g. indexing for disjoint sets of images. This can also help the application “see the future”, by exhaustively exploring all possible next steps within some small horizon, to enable scenario model checking such as in Monte Carlo simulation.


This is a different way of thinking: not only is the virtualized device cloned into the cloud, but by replicating the machine image it is possible to create multiple parallel tasks, and from there a range of new service augmentation possibilities not envisioned by the initial invocation. As previously suggested, if you add augmented application services into this you start to see a wider set of added-value services.


What this says to me is that the on-premise and off-premise geographical distinction is wrong, in the sense that the device and the machine-specific locations are the real on- and off-premise locations. It also supports the view that temporal transformation, as seen in the move from batch to near-real-time processing in traditional timeframes, may now evolve into new temporal transformations that operate beyond immediate time and create multiple versions of "parallel time services". In a sense there are multiple arrows of time. It also suggests that the concept of a cloud switch may involve a number of second-level and higher tiers of event interaction and types of VM patterns beyond just hypervisor workload management.


A summary of the cloud workload distribution patterns described in the article:


  • Primary functionality outsourcing
    • Computation-hungry applications such as speech processing, video indexing and super-resolution are automatically split from the user interface and other low-processing functions that stay on the smartphone.
  • Background
    • Functionality that does not need to interact with users on short time scales, e.g. virus scanning, indexing files for faster search, analyzing photos for common search, crawling news web pages.
  • Mainline
    • Sitting between the primary and background augmentation. The user may opt to run a particular application in a wrapped fashion, altering the method of execution, not the semantics. E.g. private-data leak detection (taint checking an application or application group), fault tolerance (e.g. using multi-variant execution analysis to protect the application from transparent bugs), debugging (e.g. keeping track dynamically of allocated memory in the heap to catch memory leaks).
  • Hardware
    • Compensation for smartphone weaknesses, e.g. memory caps or other constraints and hardware peculiarities.
  • Multiplicity
    • Use multiple copies of the system image executed in different ways. This can help run data parallel applications, e.g. indexing for disjoint sets of images. It can also help the application “see the future” by exhaustively exploring all possible next steps within some small horizon, enabling scenario model checking such as in Monte Carlo simulation.
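
A toy data-parallel sketch of that multiplicity pattern (assumed names, with worker processes standing in for cloned system images): split a collection into disjoint shards, index each shard on its own clone, and merge the partial results.

```python
from concurrent.futures import ProcessPoolExecutor

def index_shard(shard):
    """Stand-in 'indexing' work over one disjoint subset of images."""
    return {name: len(name) for name in shard}

if __name__ == "__main__":
    images = [f"img_{i:04}.jpg" for i in range(12)]
    shards = [images[i::4] for i in range(4)]      # four disjoint subsets
    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_indexes = list(pool.map(index_shard, shards))
    index = {}
    for part in partial_indexes:
        index.update(part)                         # merge the partial results
    print(len(index))                              # 12
```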
 

Thursday 16 April 2009

Factors for consideration in the performance design of Cloud Infrastructures


The evidence of performance issues in today's cloud is seen in the public domain through outages of service, loss of business through unavailable 3rd-party online commerce payment services, and even closure of site services, with impact on the data and business that subscription customers hold in these sites. More mundane, but nevertheless an issue, is the slow performance of high-usage popular web sites and the "drag" factor on user latency at the keyboard and mobile. All these issues combine to create a less than satisfactory quality of service (QoS) experience for users and providers alike.

The introduction of cloud computing concepts introduces abstraction at the edge network and across the technology tiers of infrastructure, applications and business process provisioning. Often the argument made against federated service orientation is the increased complexity of distributed IT services, when the goal is in fact to reduce complexity and simplify the user experience of the service at the edge boundary. Data, storage, servers, network and devices are all virtualized and aligned as a delivery framework that makes the selection, delivery and deployment of the user experience much more "plug and go". But there are factors that need to be considered in compressing the time and cost of delivery, set against the methods of abstracting and compartmentalizing a secure infrastructure to deliver the service.

Amdahl's Law Rising

The experience of Amdahl's law shows that adding more and more parallel processing yields a diminishing return on overall processing speed, because the serial fraction of the work limits the achievable speedup. But this effect is not restricted to processor design. Overlaying additional abstraction tiers on top of the physical and logical representation increases the number of computation and translation points in a system. This is a complex issue of node count and stages of processing versus the use of appliance hardware and energy- and space-saving design, which is at the forefront of new IT architecture and language design. Much of the current research into standards for open VM containerization, federated presence location identifiers and distributed storage, such as DMTF OVF, IETF LISP, UDT/UDP and XMPP, and of course the ubiquitous "standard Cloud API adapter", is seeking to make this transition to a universal connection and service provision model.
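
The diminishing return is easy to quantify. Amdahl's law gives the overall speedup from parallelizing a fraction p of the work across n units as S(n) = 1 / ((1 - p) + p/n), so even a highly parallel workload hits a hard ceiling:

```python
def amdahl_speedup(p, n):
    """Speedup when a fraction p of the work runs in parallel on n units."""
    return 1.0 / ((1.0 - p) + p / n)

# Diminishing returns even when 90% of the work parallelizes:
for n in (2, 8, 32, 128, 1024):
    print(f"{n:>4} units -> {amdahl_speedup(0.9, n):.2f}x")
# 2 -> 1.82x, 8 -> 4.71x, 32 -> 7.80x, 128 -> 9.34x, 1024 -> 9.91x (limit 10x)
```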

There are factors that need to be considered in the performance design of individual on-demand services and the orchestration between services.

  • Physical level
    • "Bare metal" and virtualization of database, storage cluster structures and servers
  • OS level
    • The machine image and application containerization and portability methods for replication, separation and transportability between distributed host environments
  • Message level and module level
    • The message standards granularity and API protocol selection for stateful and stateless communications
  • Process level
    • The virtual network overlay tunnel impact on performance; the federated
  • Appliance level
    • The scale density and performance characteristics, dependent on the user system's awareness of the appliance
  • Device level
    • The method of abstraction and degree of tight coupling to the host device and services
  • Market level
    • The commercial and contractual agility and effectiveness of the service, and its position in the industrial market supply chain

 

Tuesday 31 March 2009

Private and Public Cloud Outages and Performance

Some high profile reported outage examples of public cloud services. (Sources: blog reports and service monitoring sites).

· Nov 26, 2007, Yahoo e-commerce services. Heavy online traffic affected half of the 40,000 sites that subscribe to Yahoo's e-commerce service. The outage prevented sales from being completed on thousands of web sites that depend on the service. Outage: approx 6 hours.

· Feb 11, 2008, Salesforce.com. North American CRM servers (NA5) were up and down for most of the business day, due to a software upgrade installed over the weekend causing subsequent service degradation. Outage: 24 hours.

· Feb 15, 2008, Amazon S3/EC2. Outage: from 4.30am to 7.00am (approx 2 hours). Affected many startup sites, e.g. Twitter, SmugMug, 37signals and AdaptiveBlue, that use S3 to store data for their websites.

· Feb 19, 2008, Yahoo mail SMTP. Outage: delays in the SMTP service, estimated at 24 hours.

· April 28, 2008, Amazon S3. Service authentication system overloaded with user requests. Outage: 3 hours.

· Jul 20, 2008, Amazon S3. Internal system problems caused S3 to be inaccessible for up to 8 hours. Outage: 5 hours 45 minutes.

· Jul 22, 2008, Apple MobileMe launch. A mail server crash left some subscribers without email access for 5 days. Overall, less than 1% of customers were affected, permanently losing some emails sent between 18 July and 22 July.

· Aug 6, 2008, Google Gmail. A small number of Apps Premier users affected, for some up to 24 hours.

· Aug 7, 2008, Citrix GoToMeeting and GoToWebinar, due to a surge in demand. Outage: a few hours.

· Aug 8, 2008, Nirvanix and MediaMax/The Linkup (storage). The cloud service failed and closed, losing an unspecified amount of customer data, approx 45% of all data stored. The Linkup had about 20,000 paying subscribers. The aim was to migrate to the Nirvanix storage delivery network, but only a partial migration was possible before closure.

· Aug 12, 2008, Google Gmail. Users were unable to access mailboxes as Gmail returned a "Temporary Error (502)". About 20 million users visit Gmail daily, with more than 100 million accounts in total. The issue was caused by a temporary outage in the contacts system used by Gmail, which prevented Gmail from loading properly. Outage: officially 1 hour 45 min (unofficially 2 hours).

· Aug 15, 2008, Google Gmail. A small number of Apps Premier users affected, for some up to 24 hours.

· Aug 26, 2008, XCalibre FlexiScale cloud, affecting many businesses using FlexiScale on-demand storage, processing and/or network bandwidth. Cited as partly human error; the data structure was not replicated across multiple data centers. Outage: 2-3 days.

· Jan 6, 2009, Salesforce.com, system-wide outage. All Salesforce.com services across all regions were largely unavailable between 12:39pm and 1:17pm. Outage: approx 40 minutes.

· Feb 24, 2009, Google Gmail outage in America and Europe, the third outage in 6 months. One blog estimate suggested 62 hours of downtime in the last 8 months, calculating 99.2% availability, projected to reach 99.4% over 12 months. Outage: 2 hours 30 minutes.

· March 10, 2009, Google Gmail, small number of users affected. Gmail has approx 113 million users (comScore). Outage: partially fixed in a few hours, but between 24 and 36 hours to restore all affected accounts.

The frequency and duration of outages have improved from two to three years ago, during the start-up phases of these services. The current performance should also be seen in the light of the size of the user bases that the large public vendors manage, which far outnumber even large-scale outsourcing and public infrastructure user groups, which may be in the order of 100,000+ unique desktop users to 5-10 million subscriber accounts.

  • Google Gmail has 113 million active accounts, March 2009

  • Facebook has 175 million active accounts, March 2009

  • Amazon S3 stores more than 29 billion objects, October 2008

  • Yahoo! Mail has 260 million users, with a 67 petabyte server estate in the California region, March 2009

  • Myspace had 106 million accounts in Sept 2006; Myspace was overtaken by its main competitor Facebook in April 2008

  • Twitter has 4-5 million users, November 2008

  • Apple iTunes has sold 9 billion songs, representing 70% of worldwide digital sales, Jan 2009

  • Yahoo! websites received 2.4 billion page hits per day in October 2007

These statistics support the "wikinomics paradigm" of huge online resource and user capacity in comparison to physical bricks-and-mortar storage and product ranges. The microeconomics and service design have significant economies-of-scale leverage.

With cloud and on-demand services becoming more visible in mainstream discussion, these events will become more critical. A learning point from these public cloud failures is the need for transparent communications with user groups. As the cloud service becomes more visible it is necessary to increase the level of communication on system status to users, in parallel with any system technology improvements.

Yet most proprietary system failures go unnoticed by all except those affected directly. In the cloud, however, there is more transparency and higher visibility of failure and downtime events.

Google has guaranteed corporate customers paying for Google Apps Premier Edition that Gmail will be available 99.9% of the time. The 0.1%, taken literally, is 8.76 hours per year. Google publishes a status dashboard.
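
The arithmetic behind such figures is worth making explicit; a quick downtime-budget calculation at a few availability levels:

```python
HOURS_PER_YEAR = 365 * 24          # 8760

for availability in (0.999, 0.995, 0.99):
    allowed_downtime = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.1%} availability -> {allowed_downtime:.2f} h/year")
# 99.9% -> 8.76 h/year, the figure quoted above.
```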

Amazon has implemented availability zones, persistent storage and elastic IP addresses, rather than static addresses, to enable dynamic remapping on the fly to point to compute instances by the user rather than an Amazon data technician. Amazon announced an S3 storage service with a 99.9% SLA availability back in October 2007. Many companies are stated as using AWS to handle spike overflow, called "cloud bursting".

Salesforce.com publishes an operating status dashboard for all its server groups globally. This also includes a maintenance schedule for planned downtime, typically of 1 hour duration.

In conclusion, you can draw at least three potential next steps if cloud computing is to become enterprise-level for public, private and hybrid combinations of cloud services:

· Use cloud burst technology to provide failover and continuity of cloud services

· Accept existing public cloud service levels, as these may be higher than your current service levels for a number of non-core or even core services

· Build a private cloud that has the elastic compute benefits of the cloud but is preserved and managed to an internal data center standard

Monday 23 March 2009

SASS - Short Attention Span Summary - a brief history of the Cloud

I was recently looking at the reviews of "The Day the Earth Stood Still" film on Amazon; at the time of writing it is due for release on DVD, and I can't decide if this movie is a complete lemon or a subtle recasting of a classic with new ideas. It certainly took a hammering from the box office critics, but some reviews, including my own amateur effort, saw some elements of merit. The point I found more interesting, however, was a reviewer, pen name Amanda Richards, who used a concept called SASS: Short Attention Span Summary. Apart from the prodigious output, I thought the concept of SASS and the notable art of a short blog was a really useful aid. So here goes with my SASS of "The Long Tail", "Wikinomics", "The Big Switch", "The World is Flat", "Does IT Matter" and "IT Doesn't Matter, Business Processes Do", in chronological order. I thought I'd include all six as they appear to be the mainstream cloud background subject matter books currently.

The SASS of the cloud is:

  1. "Does IT Matter" - No
  2. "IT Doesn't Matter, Business Processes Do" - Yes, everything is about processes
  3. "Wikinomics" - but IT is everywhere, with the potential to change the way we do business
  4. "The Long Tail" - Yes, and it's changed the way products and services are created, provisioned and delivered
  5. "The World is Flat" - yes, and it's now global, and the west missed the significance of this shift
  6. "The Big Switch" - The significance is that we will have virtual businesses and centralized IT utility services - the cloud changes everything....

Read on

“Does IT Matter?”, Nicholas Carr (HBR article published 2003)

  1. Postulate: IT has become a widespread commodity and no longer provides business competitive advantage
  2. Companies spent billions of dollars on IT but have not seen real competitive advantage improvements.
  3. IT investments need to assess the role of IT in business and commerce to achieve the right focus on value and competitive advantage
  4. The strategic importance of IT is decreasing.

“IT Doesn’t Matter, Business Processes Do”, Howard Smith and Peter Fingar, a response to Nicholas Carr (Aug 2003)

  1. Postulate: Business processes enable business competitive advantage. It is wrong and dangerous to ignore processes and the role of IT
  2. Michael Hammer's 1990 article "Reengineering Work" is an example of this
  3. The strategic importance of IT is actually increasing, e.g. the emergence of web services enabling businesses to redesign and innovate new services.
  4. A business process revolution will occur as businesses redefine the use of IT in the context of their operations and markets.

Wikinomics, Don Tapscott and Anthony Williams, 2006

  1. Postulate: Mass collaboration changes everything
  2. The perfect storm: internet, web 2.0 tools, collaborative platforms
  3. The emergence of peer production - prosumers
  4. The wisdom of crowds, acting globally, shared spaces – the world is your R&D department
  5. Open and free, e.g. the open source ecosystem: Linux spawned a multi-billion dollar ecosystem and changed the balance of power in the software industry
  6. Ideagoras – Marketplaces for ideas, innovations and skills. Engage and co-create- co-innovation – emergence and serendipity.
  7. Escalating scope and scale of resources applied to innovation means change can unfold more quickly. Getting the right ratio between internal and external innovation.
  8. The collaboration economy, the business web
  9. Lower barriers to monetizing co-creation and collaboration channels
  10. Managing complexity – a Darwinian approach.
  11. The rise of social computing, Enterprise 2.0, harnessing the power of wikinomics.
  12. Building critical mass, supply a platform for collaboration, people governance enablers, incentives, build trust, let the process evolve,  objectives, leadership, culture of collaborative mind.

“The Long Tail” , Chris Anderson 2006

  1. Postulate:  Size of potential market is sum of all participants
  2. Temporal competition and the back catalogue including niche products can all be provisioned
  3. End of the hit parade
  4. The power of free
  5. The tyranny of locality versus logistics anywhere
  6.  The economics of abundance: Acquisition costs DOWN, Average Sale price DOWN, Gross Margin UP
  7. Why?
  8. Democratization of tools of production, distribution, joining of supply and demand (Industrialization)
  9. The power of peer production and collective intelligence
  10. One size does fit all - The emergence of statistical multiplexing
  11. The aggregators emerge, e.g. physical retailers → hybrid retailers → pure digital retailers
  12. The paradox (of the cloud) – Long tail drives a shift towards 101 tastes and choice; the ability to provision is king

“The World is Flat”, Thomas Friedman, 2006

  1. Postulate: The global balance of economics has changed, with a triple convergence of complementary goods, the emergence of horizontal collaboration business models, and the opening up of eastern markets into a global market, together with ten flatteners (changers) of the new open playing field.
  2. The perfect storm: Late 20th century investment in fiber optic cables between west and east; collaborative tools on the internet; economic reforms to enable eastern countries to enter and exploit technologies and services.
  3. The west was looking the wrong way while the east caught up: China, India, the former Soviet Union countries and the Asia-Pacific markets emerged
  4. Ten flatteners: Collapse of the Berlin Wall; Netscape, Workflow, Open Sourcing, Outsourcing, Offshoring, Supply chaining, Insourcing (BPO e.g. UPS repairs Toshiba PCs), In-forming, “The Steroids”( personal mobile communications devices)
  5. Connection and Collaboration
  6. The 21st Century is flat  

“The Big Switch”, Nicholas Carr, 2008

  1. Postulate: Business and society will shift to a virtual society, creating fundamental changes in people's lives, markets, jobs and skills
  2. IT will become like the utility industry – the “electrification”, “the Electric Grid” of the IT industry
  3. The PC age giving way to the Utility age
  4. The internet will become a “world wide computer”
  5. The rise of the Google model will supersede the old ownership models (the older Microsoft model; Bill Gates' memo to employees in Fall 2005 created a path to the utility age)
  6. The Google model will create super data center services outmatching any company's own computer investments. All physical hardware will be "centralized" as commodity utility services.
  7. Virtualization will truly revolutionize all IT
  8. 3Tera's software provides an example of what the future of the computer business might look like: no physical products at all. Vendors create virtual versions of their equipment or software and sell them as icons that plug into programs like AppLogic.
  9. Move to service based economies
  10. From the many to the few – consolidation and flushing out versus the power of the many and 101 choice
  11. Living in the Cloud
  12. Collaborative content
  13. Accelerating concentration of wealth in large businesses
  14. Clustering of like minds - a threat or a benefit?
  15. The threats to the internet
  16. The spiders web – zero privacy, you are what you do and say profiling.
  17. iGod - all Human knowledge, wisdom and interactions inscribed into the internet
  18. Impact of this on society and relationships. 

SOA God's Kitchen


A colleague at work, Adam Philips, produced a great picture concept for the description of SOA based on earlier work, entitled "SOA Beef Bourguignon".

I love the slide; it's one of the best SOA slides I've seen in a long while, and I can relate to that cook with the rolling pin! The use of pictures to convey the concept of the tiers is very good. Strictly, the abstraction of logic and infrastructure has more layers: the ingredients would include the hardware (servers, networks, storage, security etc.), so you could add another cylinder on the bottom called "Infrastructure" and add a picture like kitchen utensils. You can also show the multiple form factors that can consume services, such as a mobile, laptop, netbook or mainframe terminal, at the top of the picture.

This picture can be further enhanced with SOA governance and organisation, and in particular the concept of a Service Inventory (Thomas Erl has this as a key feature of the SOA world; see www.soapatterns.org or www.soapatterns.com, the latter of which has podcasts you can listen to). The analogy here is that it is the "kitchen" in which the SOA environment/ecosystem exists. The basic difference is that we need to build a logical service contract library at design time, and the run-time expression of this in a traditional SOA is the registry and repository (there can be multiple). Strictly, SOA does not need these to work, as you can still do loose coupling, as with WOA and REST, and still recognise this as a SOA concept, albeit a lightweight implementation of it.

The title God's Kitchen is a reference to the trance music namesake and the myriad of music and many blended mixes that constantly remind me of variation. This made me recall an earlier YouTube SOA "hit" which used the concept of music orchestration in SOA; now we see new music, new orchestration and new kitchens emerging in the clouds.


Wednesday 11 March 2009

Applying SOA to testing

Some thoughts on criteria for SOA-enabled testing.

Key principles:

  • Definition of the types of service tested, e.g. technical, IT service and business process service tests
  • Definition of the SOA inventory concept and the service lifecycle idea
  • Integration of functional and non-functional testing combined under service contract test strategies
  • Definition of the granularity of testing; in SOA this is a holistic testing approach
  • Definition of testing assurance, continuous testing and the management of the service lifecycle of testing
  • Separation and specific treatment of security testing, notably around security policy and approach, particularly with external testing services
  • The business operating models for testing supported: there are at least 4 different testing business models to give the client options, all underpinned by a common process:
    • Testing onsite
    • Testing off-site
    • Testing 3rd parties via direct and proxy
    • Testing onsite via remote secure access

  • Testing for SOA governance
  • Specific and quantified benefits of the SOA approach to testing, e.g. faster speed of test service, 10 to 50% reduction in time to test etc., positioned in relation to pricing, billing methods and benchmarks against competitors

 

SOA Testing functional capability list

  • Elastic provisioning of the dev and test environment on a cloud utility service (infrastructure as a service)
  • Ability to offer a collaborative test cycle with the users/customers themselves (a core element of SOA and service orientation is to design for SLA)
  • Security provision for testing to be either off-premise or on-premise inside the client firewall, where security permits
  • Test lifecycle with a service contract as the unit of test specification, passing through a SOA-style service lifecycle, cradle to grave
  • Definition of the service contract in the test strategy and test specification process, in particular a use-case-style test template script
  • Test assurance cycle: how service orientation governs the state and performance of the system through some kind of assurance service where requested (not a guaranteed service SLA, but a gold, silver or bronze service management rating with ITIL links). This may include continuous testing cycles for assurance.
  • Test estimating process based on function point analysis, with the service contract as the unit of estimate
  • Abstraction of service testing from business testing and IT testing. A founding principle of SOA is to "test the test" of the logical service as conformant to a SOA governance model, ideally.
  • Configuration and version management testing of service versioning. This may include generational testing and isomorphic architecture testing principles to test families of services or types of delivery.
  • Legacy testing (non-SOA managed) versus SOA testing: the differences and how they are handled
  • Storage of the nomenclature of test environments and results. How are the test specs/scripts and results stored for reuse?
  • Contractual penalties testing and how this is differentiated from ordinary project delivery requirements testing, which may or may not be penalties driven. This is not SOA-specific, but shows how the SLA non-functional features are built into SOA testing: SOA is based on integrated functional services with non-functional metric characteristics, e.g. "this sales order process is a service that operates to these performance and volume characteristics." In traditional testing the volume and system testing is separate from the functional testing, which is erroneous (see the sketch after this list).
  • Billing mechanisms for testing in SOA based on per use, per hour or per service function point tested
  • 3rd party testing: how do we test multiple vendors' and parties' SOA solutions and conformance?
  • Simulation and other types of SOA accelerators
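
As a minimal sketch of that integration of functional and non-functional testing (my own illustration; `place_sales_order` is an assumed stub standing in for the real service under test), a single service-contract test can assert both the functional result and the latency clause of the SLA together:

```python
import time
import unittest

def place_sales_order(items):
    """Assumed stub for the sales order service under test."""
    time.sleep(0.05)                        # simulated processing time
    return {"status": "ACCEPTED", "lines": len(items)}

class SalesOrderContractTest(unittest.TestCase):
    MAX_LATENCY_S = 0.5                     # non-functional term of the contract

    def test_functional_and_nonfunctional_together(self):
        start = time.perf_counter()
        result = place_sales_order(["sku-1", "sku-2"])
        elapsed = time.perf_counter() - start
        self.assertEqual(result["status"], "ACCEPTED")      # functional clause
        self.assertEqual(result["lines"], 2)
        self.assertLessEqual(elapsed, self.MAX_LATENCY_S)   # SLA clause

if __name__ == "__main__":
    unittest.main()
```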