Monthly Archives: October 2008

What cloud computing really means

Cloud computing is all the rage. “It’s become the phrase du jour,” says Gartner senior analyst Ben Pring, echoing many of his peers. The problem is that (as with Web 2.0) everyone seems to have a different definition.

As a metaphor for the Internet, “the cloud” is a familiar cliché, but when combined with “computing,” the meaning gets bigger and fuzzier. Some analysts and vendors define cloud computing narrowly as an updated version of utility computing: basically virtual servers available over the Internet. Others go very broad, arguing anything you consume outside the firewall is “in the cloud,” including conventional outsourcing.

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT’s existing capabilities.

Cloud computing is at an early stage, with a motley crew of providers large and small delivering a slew of cloud-based services, from full-blown applications to storage services to spam filtering. Yes, utility-style infrastructure providers are part of the mix, but so are SaaS (software as a service) providers such as Salesforce.com. Today, for the most part, IT must plug into cloud-based services individually, but cloud computing aggregators and integrators are already emerging.

1. SaaS
This type of cloud computing delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared to conventional hosting. Salesforce.com is by far the best-known example among enterprise applications, but SaaS is also common for HR apps and has even worked its way up the food chain to ERP, with players such as Workday. And who could have predicted the sudden rise of SaaS “desktop” applications, such as Google Apps and Zoho Office?
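To make the multitenant idea concrete, here is a minimal sketch (Python with an in-memory SQLite database; the table, tenants, and rows are invented): one application instance serves many customers, and every query is scoped by a tenant identifier so each customer sees only its own data. It illustrates the pattern only, not any vendor’s actual design.

```python
import sqlite3

# One shared database for every customer ("tenant"); each row is tagged with
# the tenant it belongs to, so a single application instance can serve many
# customers while keeping their data separate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (tenant_id TEXT, name TEXT, email TEXT)")
conn.execute("INSERT INTO contacts VALUES ('acme', 'Ada Lovelace', 'ada@acme.example')")
conn.execute("INSERT INTO contacts VALUES ('globex', 'Grace Hopper', 'grace@globex.example')")

def contacts_for(tenant_id):
    """Every query is scoped to the calling tenant -- the heart of multitenancy."""
    rows = conn.execute(
        "SELECT name, email FROM contacts WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(contacts_for("acme"))    # only Acme's rows
print(contacts_for("globex"))  # only Globex's rows
```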

2. Utility computing
The idea is not new, but this form of cloud computing is getting new life from Amazon.com, Sun, IBM, and others who now offer storage and virtual servers that IT can access on demand. Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day such services may replace parts of the datacenter. Other providers offer solutions that help IT create virtual datacenters from commodity servers, such as 3Tera’s AppLogic and Cohesive Flexible Technologies’ Elastic Server on Demand. Liquid Computing’s LiquidQ offers similar capabilities, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.
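The essence of utility computing is that capacity becomes an API call rather than a purchase order. The sketch below assumes a purely hypothetical provisioning endpoint and payload (compute.example.com, the field names, and the token are placeholders, not any provider’s real interface); it only shows the shape of requesting a virtual server on demand.

```python
import json
import urllib.request

# Hypothetical utility-computing API: the endpoint, fields, and token are
# placeholders for illustration only, not any provider's real interface.
API = "https://compute.example.com/v1/instances"

def provision_server(cpu_cores, memory_gb, api_token):
    """Ask the provider for a virtual server on demand and return its ID."""
    payload = json.dumps({"cpu": cpu_cores, "memory_gb": memory_gb}).encode()
    req = urllib.request.Request(
        API,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["instance_id"]

# Capacity becomes an API call and a metered bill instead of a purchase order:
# server_id = provision_server(cpu_cores=2, memory_gb=4, api_token="...")
```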

3. Web services in the cloud
Closely related to SaaS, Web service providers offer APIs that enable developers to exploit functionality over the Internet, rather than delivering full-blown applications. They range from providers offering discrete business services — such as Strike Iron and Xignite — to the full range of APIs offered by Google Maps, ADP payroll processing, the U.S. Postal Service, Bloomberg, and even conventional credit card processing services.
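Consuming one of these web services typically amounts to an HTTP request whose response feeds directly into your own application. The sketch below uses an invented quote-lookup URL and response format purely for illustration; no real provider’s API is being described.

```python
import json
import urllib.request

# A discrete business service consumed over the Internet. The URL and the
# JSON fields are illustrative stand-ins, not a real provider's API.
def lookup_quote(symbol):
    url = f"https://quotes.example.com/api/price?symbol={symbol}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["price"]

# The calling application never hosts the pricing engine; it just calls it:
# print(lookup_quote("IBM"))
```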

4. Platform as a service
Another SaaS variation, this form of cloud computing delivers development environments as a service. You build your own applications that run on the provider’s infrastructure and are delivered to your users via the Internet from the provider’s servers. Like Legos, these services are constrained by the vendor’s design and capabilities, so you don’t get complete freedom, but you do get predictability and pre-integration. Prime examples include Salesforce.com’s Force.com, Coghead and the new Google App Engine. For extremely lightweight development, cloud-based mashup platforms abound, such as Yahoo Pipes or Dapper.net.
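A platform as a service asks you to supply only the application code; the provider supplies the servers, scaling, and delivery. Below is a minimal request handler written in the style of the early App Engine Python SDK’s webapp framework; the exact module paths are recalled from that era and should be treated as illustrative rather than authoritative.

```python
# A sketch in the style of the original App Engine Python SDK's "webapp"
# framework; treat the imports as illustrative of that era's API.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        # The provider runs the servers; you only supply the handler code.
        self.response.out.write("Hello from someone else's infrastructure")

application = webapp.WSGIApplication([("/", MainPage)], debug=True)

if __name__ == "__main__":
    run_wsgi_app(application)
```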

5. MSP (managed service providers)
One of the oldest forms of cloud computing, a managed service is basically an application exposed to IT rather than to end-users, such as a virus scanning service for e-mail or an application monitoring service (which Mercury, among others, provides). Managed security services delivered by SecureWorks, IBM, and Verizon fall into this category, as do such cloud-based anti-spam services as Postini, recently acquired by Google. Other offerings include desktop management services, such as those offered by CenterBeam or Everdream.

6. Service commerce platforms
A hybrid of SaaS and MSP, this form of cloud computing offers a service hub that users interact with. It is most common in trading environments, such as expense management systems that allow users to order travel or secretarial services from a common platform, which then coordinates the service delivery and pricing within the specifications set by the user. Think of it as an automated service bureau. Well-known examples include Rearden Commerce and Ariba.
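As a toy illustration of the hub idea, the sketch below takes an order, filters the available providers against the spending limit the user set, and routes the request to one of them. Provider names, prices, and the policy rule are all invented for illustration.

```python
# A toy sketch of the "service hub" idea: users place orders through one
# platform, which picks a provider within the spending limits the user set.
# Providers, prices, and rules are invented for illustration.
TRAVEL_PROVIDERS = [
    {"name": "AirExample", "price": 420.0},
    {"name": "BudgetJet", "price": 310.0},
]

def order_travel(user, max_price):
    """Route the request to the cheapest provider that fits the user's policy."""
    eligible = [p for p in TRAVEL_PROVIDERS if p["price"] <= max_price]
    if not eligible:
        return f"No provider meets {user}'s limit of ${max_price:.2f}"
    choice = min(eligible, key=lambda p: p["price"])
    return f"Booked {choice['name']} for {user} at ${choice['price']:.2f}"

print(order_travel("alice", max_price=350))
```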

7. Internet integration
The integration of cloud-based services is in its early days. OpSource, which mainly concerns itself with serving SaaS providers, recently introduced the OpSource Services Bus, which employs in-the-cloud integration technology from a little startup called Boomi. SaaS provider Workday recently acquired another player in this space, CapeClear, an ESB (enterprise service bus) provider that was edging toward b-to-b integration. Way ahead of its time, Grand Central — which wanted to be a universal “bus in the cloud” to connect SaaS providers and provide integrated solutions to customers — flamed out in 2005.

Today, with such cloud-based interconnection seldom in evidence, cloud computing might be more accurately described as “sky computing,” with many isolated clouds of services that IT customers must plug into individually. On the other hand, as virtualization and SOA permeate the enterprise, the idea of loosely coupled services running on an agile, scalable infrastructure should eventually make every enterprise a node in the cloud. It’s a long-running trend with a far-out horizon. But among big metatrends, cloud computing is the hardest one to argue with in the long term.

What is Cloud Computing?

Cloud computing is Internet-based (“cloud”) development and use of computer technology (“computing”).

The cloud is a metaphor for the Internet (based on how it is depicted in computer network diagrams) and is an abstraction for the complex infrastructure it conceals.[1] It is a style of computing in which IT-related capabilities are provided “as a service”,[2] allowing users to access technology-enabled services from the Internet (“in the cloud”)[3] without knowledge of, expertise with, or control over the technology infrastructure that supports them.[4] According to a 2008 paper published by IEEE Internet Computing, “Cloud Computing is a paradigm in which information is permanently stored in servers on the Internet and cached temporarily on clients that include desktops, entertainment centers, tablet computers, notebooks, wall computers, handhelds, sensors, monitors, etc.”[5]
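A small sketch of the storage model that quotation describes: the permanent copy of the data lives on the server, while clients hold only a short-lived cache. The in-memory dictionary below stands in for a real Internet-hosted service; the names and the time-to-live are invented.

```python
import time

# The paradigm from the quoted definition: data lives permanently on a
# server; clients keep only a short-lived cache. The dict below stands in
# for a real Internet-hosted service.
SERVER_STORE = {"doc-42": "quarterly sales figures"}

class CachingClient:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.cache = {}  # key -> (value, fetched_at)

    def read(self, key):
        value, fetched_at = self.cache.get(key, (None, 0))
        if value is None or time.time() - fetched_at > self.ttl:
            value = SERVER_STORE[key]          # authoritative copy on the "server"
            self.cache[key] = (value, time.time())
        return value

laptop = CachingClient()
print(laptop.read("doc-42"))  # fetched from the server, then cached locally
print(laptop.read("doc-42"))  # served from the temporary client cache
```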

Cloud computing is a general concept that incorporates software as a service (SaaS), Web 2.0 and other recent, well-known technology trends, in which the common theme is reliance on the Internet for satisfying the computing needs of the users. For example, Google Apps provides common business applications online that are accessed from a web browser, while the software and data are stored on the servers.

The next tech boom is already underway

Cloud computing has become a reality, yet the hype surrounding the cloud has started to outrun the laws of physics and economics. The fully robust cloud (all software delivered on demand, replacing the enterprise data center) will crash into some of the same barriers and diseconomies that enterprise IT faces today.

Certainly there will always be a business case for elements of the cloud, from Google’s pre-enterprise applications to Amazon’s popular services to the powerhouse CRM, HR and other established cloud offerings. Yet there are substantial economic barriers to entry based on the nature of today’s static infrastructure.

We’ve seen this collision between new software demands and network infrastructure many times before, as it has powered generations of innovation around TCP/IP, network security and traffic management and optimization.

It has produced a lineup of successful public companies well positioned to lead the next tech boom, which may even be recession-proof. Cisco, F5 Networks, Riverbed and even VMware promise to benefit from this new infrastructure and the level of connectivity intelligence it promises. (More about these companies and others later in this article.)

Static Infrastructure meets Dynamic Systems and Endpoints

I recently wrote about clouds, networks and recessions, taking a macro perspective on the evolution of the network and a likely coming recession. I also cited virtualization security as an example of yet another big bounce between more robust systems and static infrastructure that has slowed technology adoption and created demand for newer and more sophisticated solutions.

I posited that VMware was a victim of expectations raised by the promise of the virtualized data center and muted by technological limitations its technology partners could not address quickly enough. Clearly the network infrastructure has to evolve to the next level and enable new economies of scale. And I think it will.

Until the current network evolves into a more dynamic infrastructure, all bets are off on the payoffs of pretty much every major IT initiative on the horizon today, including cost-cutting measures intended to shrink operating costs without shrinking the network.

Automation and control have been both a key driver of and a barrier to the adoption of new technology, as well as to an enterprise’s ability to monetize past investments. Increasingly complex networks require escalating rates of manual intervention. This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.

Networks Frequently Start with Reliance on Manual Labor

Decades ago the world’s first telecom networks were simple and fairly manageable, at least by today’s standards. The population of people who had telephones was lower than the population of people who today have their own blogs. Neighborhoods were also very stable and operators often personally knew many of the people they were connecting.

Those days of course are long gone, and human operators are today only involved in exceptional cases and highly-automated fee-based lookup services. The Bell System eventually automated the decisions made by those growing legions of operators, likely because scale and complexity were creating the diseconomies that larger enterprise networks are facing today. And these phone companies eventually grew into massive networks servicing more dynamic rates of change and ultimately new services. Automation was the best way to escape the escalating manual labor requirements of the growing communications network.

TCP/IP Déjà vu

A very similar scenario is playing itself out in the TCP/IP network as enterprise networks grow in size and complexity and begin handling traffic in between more dynamic systems and endpoints. The recent Computerworld survey (sponsored by Infoblox) shows larger networks paying a higher IPAM price per IP address than smaller networks. As I mentioned earlier at Archimedius, this shows clear evidence of networks growing into diseconomies of scale.

Acting on a hunch, I asked Computerworld to pull more data based on network size, and they were able to break their findings down into three network-size categories: 1) under 1,000 IP addresses; 2) 1,000 to 10,000 IP addresses; and 3) more than 10,000 IP addresses. Because the survey was based on only about 200 interviews, I couldn’t break the trends down any further without taking statistical leaps with small samples.

Consider what it takes to keep a device connected to an IP network and ensure that it’s always findable. First, it will need an unused IP address. In a 1.0 infrastructure administrators use spreadsheets to track used and available IPs and assign them to things that are “fixed”, like printers and servers.

In a 2.0 world, servers are virtual and dynamic, and they move around even more frequently than wireless laptops and phones. The DHCP protocol can assign addresses dynamically, along with plenty of other configuration data (such as the addresses of critical infrastructure elements like the network gateway router and the DNS server, and even device-specific configuration information). But the pools of addresses DHCP hands out still have to be managed, there are lots of reasons why admins need to know which device received a particular address, and applications need to be able to reach devices by name (e.g., a Windows host name) rather than by IP address.
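For a sense of the bookkeeping involved, here is a toy IP address tracker that does what the spreadsheet is being asked to do: hand out a free address from a pool, remember which device holds it, and answer name-to-address lookups. Real IPAM and DHCP/DNS systems do far more (leases, scopes, dynamic DNS updates); this only illustrates the task described above, with an invented subnet and host names.

```python
import ipaddress

class TinyIpam:
    """Toy IP address tracker: the bookkeeping a spreadsheet tries to do."""

    def __init__(self, cidr):
        self.pool = list(ipaddress.ip_network(cidr).hosts())
        self.leases = {}   # hostname -> IP address

    def allocate(self, hostname):
        """Hand out the next free address and record who got it."""
        if hostname in self.leases:
            return self.leases[hostname]
        ip = self.pool.pop(0)
        self.leases[hostname] = ip
        return ip

    def release(self, hostname):
        """Return the address to the pool when the device moves or dies."""
        self.pool.append(self.leases.pop(hostname))

    def lookup(self, hostname):
        """Applications want to reach devices by name, not by raw IP."""
        return self.leases.get(hostname)

ipam = TinyIpam("10.0.0.0/29")
print(ipam.allocate("print-server"))   # 10.0.0.1
print(ipam.lookup("print-server"))     # 10.0.0.1
ipam.release("print-server")           # address becomes reusable
```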

Perhaps it takes 30 minutes on average to find an address, allocate it, get a device configured, update the spreadsheet and update DNS. That was more manageable in a static world, though the increasing cost/IP to perform these tasks in larger networks is a direct consequence of manual systems breaking down in the face of scale. Now consider a 30 minute process for a device – or a virtual application instance – that changes IPs every few hours, or faster. When a 1.0 infrastructure meets 2.0 requirements, things start to break pretty quickly.
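The arithmetic behind that squeeze is easy to sketch. Using the 30-minute figure above and otherwise invented address counts and churn rates, the point is that churn, not size alone, drives the labor bill:

```python
# Rough arithmetic behind the "static meets dynamic" squeeze. The 30-minute
# figure comes from the article; the address counts and churn rates are
# invented purely for illustration.
MINUTES_PER_CHANGE = 30

def yearly_labor_hours(num_addresses, changes_per_address_per_year):
    return num_addresses * changes_per_address_per_year * MINUTES_PER_CHANGE / 60

# A static network: 5,000 addresses, each touched about once a year.
print(yearly_labor_hours(5_000, 1))    # 2500.0 hours

# The same network with dynamic, virtualized endpoints changing weekly.
print(yearly_labor_hours(5_000, 52))   # 130000.0 hours -- roughly 60 full-time staff
```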

That is why, even with the simple act of managing an enterprise network’s IP addresses, which is critical to the availability and proper functioning of the network, expense and labor requirements actually go up as IP addresses are added. As TCP/IP continues to spread and take productivity to new heights, management costs are already escalating.

This is a very fundamental observation based on one of the most common network management tasks. You can assume that other slopes are even steeper because of complexity and reliance on manual labor.

Some enterprises are already paying even higher expenses per IP address, and chances are they don’t even know it, because these expenses are hidden within network operations. Reducing headcount risks increasing these costs further or forcing substantial sacrifices in network availability and flexibility.

IPAM as the Switchboard Metaphor

If something as simple and straightforward as IP address management doesn’t scale, imagine the impacts of more complex network management tasks, like those involved with consolidation, compliance, security, and virtualization. There are probably many other opportunities for automation tucked away within many IT departments in the mesh between static infrastructure and moving, dynamic systems and endpoints.

This will force enterprise IT departments into discussions similar to those that likely took place decades ago within the Bell System, when telecom executives looked at the dramatic increase in the use and distribution of telephones and the mushrooming requirements for operators and switchboards and offices and salaries and benefits. One can only imagine the costs and challenges we would face today if basic connection decisions were still made by a human operator.

The counterpart to the switchboard of yesteryear for IPAM is the spreadsheet of today. Networking pros in most enterprises manage IP addresses using “freeware” that has an ugly underside; it produces escalating hidden expenses that are only now being recognized, mostly by large enterprises. Mix the growth of the network with new dynamic applications and new factors of mobility with a little human error and you have a recipe for availability, security and TCO issues.

Many of these switchboards could probably be bought or manufactured today for a song, yet it is the other costs (TCO, availability and flexibility) that make them cost-prohibitive.

Server Déjà vu

Another TCO fable that is similarly bound to take the steam out of cloud fantasies has to do with hardware expenses. The cloudplex will use racks of commodity servers populated with VMs that can scale up as needed in order to save electricity and make IT more flexible. That makes incredibly good sense, but are we really there yet? No.

Servers have a very large manual labor component, according to an IDC report hosted at Microsoft.com. The drumbeat for real estate and electricity savings may play well to the bigger-picture buyer; yet perhaps the real payoff of virtualization is its potential to automate manual tasks, like creating and moving a server on demand.

Just how many organizations have launched virtualization initiatives only to find out that they didn’t have the infrastructure to allow them to save electricity, real estate or people power? The network infrastructure simply wasn’t intelligent enough to enable anything more than virtualization-lite, because the links between the infrastructure and the software were still manually constrained.

Yet one of the core promises of virtualization is to automate the deployment of server power. If this is constrained by infrastructure1.0 (as I’m suggesting) then VMware and its partners need to address the “static infrastructure meets dynamic processing power” challenge rapidly in order to achieve levels of growth once expected in 2007. With Microsoft now in the virtualization market thanks to Hyper-V, VMware’s window of first mover advantage is starting to close.

Virtualization security now risks becoming a metaphor for other technology-related issues that could slow down the adoption of virtualization in the lucrative production data center market.

Netsec Wasn’t Ready for Virtsec

The lack of network security connectivity intelligence meant that security policy, for example, would limit VMotion to within hardware-centric hypervisor VLANs. Network security infrastructure wasn’t prepared for the challenges of protecting moving, state-changing servers, despite the promise of a stellar lineup of VMsafe partners.

The promise of virtualization that drove VMware’s stock price into the clouds eventually met lowered growth expectations as deployments were slowed by the lack of connectivity intelligence, a lack that no doubt undercut other potential business cases for virtualization’s unquestionable power to someday unleash new economies of scale and computing power. These issues, too, will hit the cloud dream, just as they have impacted other initiatives, albeit on a smaller and less visible scale.

Today plenty of new initiatives face mounting pressure for connectivity intelligence and automation, and similar ecosystem finger-pointing has already left enterprise CIOs holding the bag. Whether or not we enter a global recession, these pressures will continue and likely worsen. They are artifacts of years of application, network and endpoint intelligence promises colliding with static TCP/IP infrastructure.

Saving money by cutting network operations or capital budgets is the equivalent of Ma Bell laying off operators or closing switchboards in the midst of unstoppable growth. Automation is the only way out, as Cisco’s Chambers hinted recently.

Back to the Clouds and Virtualization

Cloud computing is dynamic computing power on a massive scale, delivering new economies for IT services and applications. The business case lies in the gap between those economies and the prices enterprises are already paying for their own services, alongside operations, sales, marketing and new infrastructure requirements.

As much as cloud computing has rallied behind the prospect of electricity and real estate savings, the business case still feels like a dotcom hangover in some cases. Virtualization is still a bit hamstrung in the enterprise by the disconnect between static infrastructure and moving, state-changing VMs; and labor is the largest cost component of server TCO (IDC findings) and a significant component of network TCO (as suggested by the Computerworld findings). So just how much will real estate and electricity savings offset other diseconomies and barriers in the cloud game? I think cloud computing will also have to innovate in areas like automation and connectivity intelligence.

For the network to be dynamic, for example, it needs continuous, dynamic connectivity at the core network services level. Network, endpoint and application intelligence will all depend upon connectivity intelligence in order to evolve into dynamic, automated systems that don’t require escalating manual intervention in the face of network expansion and rising system and endpoint demands.

Getting beyond Infrastructure1.0’s Zero Sum Game

Whether you “cloudsource” or upsize your network to address any number of high-level business initiatives, the requirements for infrastructure2.0 will be the same. You can certainly get to virtualization and cloud (or consolidation or VoIP, etc.) with a static infrastructure; you’ll just need more “operators,” more spreadsheets and other forms of manual labor. That means less flexibility, more downtime and higher TCO; and you’ll be going against the collective wisdom of decades of technologists and innovations.

This recession-proof dynamic gives the leaders in TCP/IP, netsec and traffic optimization an inherent advantage, if they can get the connectivity intelligence necessary to deliver dynamic services. They have demonstrated the expertise to build intelligence into their gear; they just haven’t had the connectivity intelligence to deliver a dynamic infrastructure. Yet that infrastructure is inevitable.

The Potential Leaders in Infrastructure2.0

Cisco is the leader in TCP/IP and has the most successful track record when it comes to executing in the enterprise IT market. Cisco has kept up with major innovations in security and traffic management as well, and it is likely to become a leader in Infrastructure2.0 as enterprises seek to boost productivity while their networks continue to become strategic to business advantage in an uncertain world economy.

F5 Networks has become the leader in application-layer traffic management and optimization, thanks to its uncanny ability to monetize the enterprise web, or the enterprise initiative to deliver core applications over the WAN and Internet. Its ability to merge load balancing with sophisticated application intelligence positions it to play an important role in the development of dynamic infrastructure.

Riverbed has come on the scene thanks to its ability to optimize a vast array of network protocols so that its customers could empower their branch offices like never before. While many tech leaders focused on the new data center, Riverbed achieved stellar growth by focusing on the branch office boom enabled by breakthroughs in traffic management and optimization. It was a smart call that has positioned Riverbed to be a leader in the emerging dynamic network.

Infoblox is the least known of the potential I2.0 leaders. It is a private company that already counts more than 20% of the Fortune 500 as customers. Its solutions automate core network services (including IPAM), enabling dynamic connectivity intelligence for TCP/IP networks. (Disclosure: I left virtualization security leader Blue Lane Technologies in July to join Infoblox, largely because of their legacy of revenue growth, sizable customer base and the promise of core network service automation.) Infoblox’s founder and CTO is also behind the IF-MAP standard, a new I2.0 protocol that holds promise as a key element for enabling dynamic exchange of intelligence among infrastructure, applications and endpoints (think MySpace for your infrastructure).

VMware is executing on the promise of production virtualization and clearly now has the most experience in addressing the challenges of integrating dynamic processing power with static infrastructure. I think VMware’s biggest question will be how much it has to build or acquire in order to address those challenges. Not all of its technology partners are adequately prepared for the network demands of dynamic systems and endpoints. VMsafe was a big step forward on the marketing front, but partners have been slow to execute virtsec-ready products.

Google has no doubt benefited from the hype surrounding cloud computing. It has been investing in cloudplexes and new pre-enterprise cloud applications. While I do have reservations about its depth of infrastructure experience (versus the Nicholas Carr prediction of the eventual decline of enterprise IT), I think one would be hard-pressed not to include Google as a player driving requirements for a more dynamic infrastructure.

Microsoft has recently become more vocal on both the virtualization and cloud fronts and has tremendous assets to force innovation in infrastructure, in the same way that its more powerful applications have influenced endpoint and server processing requirements. It is likely to play a similar role as the network becomes more strategic to the cloud.

There are no doubt other players (both public and private) that promise to play a strategic role in this next technology revolution, including those delivering more power, automation and specialization around network, endpoint and application intelligence as well as enabling more movement and control in virtual and cloud environments. All are welcome to join the conversation.

These leaders are well positioned to play a substantial part in the race to deliver Infrastructure2.0, and strategic enterprise networks promise to be big winners. The dynamic infrastructure will change the economics of the network by automating previously manual tasks, and it will unleash new potential for application, endpoint and network intelligence. It will also play a major part in the success or failure of many leading networking and virtualization players, as well as enterprise IT initiatives, during periods of economic weakness and beyond. Infrastructure2.0 is the next technology boom. It is already underway.

CLOUD OPTIMIZATION

DCS understands that a data center facility and the nodes within it form an intrinsically linked ecosystem. Based on an analysis of each customer’s compute requirements and physical infrastructure, our experts optimize the data center environment, from the individual server component up to the facility level. The result is a harmonized solution that delivers excellent efficiency and performance, yielding improved business results.

  • Engineered for lower costs through energy efficiency
  • Reduced network infrastructure requirements
  • Customized support at the node, rack and data center levels
  • Best thermal practices applied

Planning for the data center
Planning for the data center is critical, and gaining an understanding of your precise needs allows Dell to tailor a solution specifically to your demands. After consulting with customers, current and future business needs are translated into optimized facility and system designs. Pinpointing the correct architecture and cooling capacity avoids costly overshooting and delivers a solution that grows as you grow.

Industry roadmaps, partnerships and Dell’s engineering bench-strength in data center optimization allow our customers to deploy solutions that scale with their business needs. Services around data center design, layout, thermal flows and power optimization are combined with customized hardware to provide solutions that meet specific customer needs.

Azure

New “cloud-computing” platform, Windows Azure, gives Microsoft access to huge business market

After three years of work, Microsoft Chief Software Architect Ray Ozzie unveiled a wholesale change in strategy Monday.

The Redmond company, whose mega-profits derive from software that runs on PCs and server computers, is launching a broad computing platform for a new era of anytime, anywhere access to applications and information over the Internet.

“It’s a complete re-helming of Microsoft’s strategy across the board,” said Lee Nicholls, who tracks Microsoft closely as global-solutions director at technology-services firm Getronics. “We’ve known that it had to happen because the world is evolving and the industry is changing.”

While consumers have enjoyed Web-based applications such as e-mail for several years, there’s a growing business movement toward the Web. In addition to using the Internet as a primary way to interact with customers, corporate technology managers are running more of their applications online — “cloud computing,” as it’s called — to save money and be flexible.

“[T]he systems that we’re building right now for cloud-based computing are setting the stage for the next 50 years of systems, both outside and inside the enterprise,” Ozzie said.

Elements of the platform, called Windows Azure, will compete directly with Amazon.com’s Elastic Compute Cloud, giving Washington state two front-runners in a fast-growing and potentially enormous new market.

In addition to software development, the region is home to several of the large, power-hungry data centers on which this “cloud computing” model depends.

Ozzie acknowledged the work Amazon has done, and other executives talked about opportunities for Microsoft and Amazon to work together.

“I’d like to tip my hat to Jeff Bezos and Amazon for their innovation and for the fact that across the industry all of us are going to be standing on their shoulders,” Ozzie said.

Microsoft is backing Windows Azure with billions of dollars spent building and managing a global network of data centers to power its own Internet services, including Web-based e-mail, Internet search and high-traffic Web sites.

A few years ago, Ozzie assigned top technical talent to evaluate those services and build a common platform for them, as well as for the emerging online needs of customers.

In keeping with its traditional strategy of building broad platforms that serve the full spectrum of users, Windows Azure will serve individual consumers, software developers and corporations of all sizes.

On Azure, software developers can write Web-based applications using their existing skills, while avoiding the costs and risks of building and running the necessary infrastructure.

Microsoft will essentially rent space in one of its data centers; the developer won’t have to buy or manage that computing power itself.

A corporation could do the same thing with its applications. Many already are, for certain functions, such as e-mail and customer-relationship management.

Microsoft says it can operate data centers — sometimes called server farms — more efficiently than its customers. This saves customers the cost of hardware and software, as well as space, electricity and staff required to operate a data center.

These globally linked data centers also allow customers to quickly add capacity if demand for an application spikes, and they provide backup if natural disasters or other interruptions take an individual data center offline.

Jack Wilson, chief technology officer of Bellevue-based Laplink Software, said Azure may save companies like his from building their own infrastructure. It could also help the maker of PC connectivity and migration applications reduce development time.

“As a small company, it offers some nice advantages since we can add our products to these services quickly, therefore helping increase profits,” Wilson said in an e-mail.

There are potential environmental benefits, too.

“How inefficient is it to have 10 companies lined up down Main Street, all running their own server farms at 10 percent capacity?” said Kip Kniskern, a contributor to LiveSide.net, which tracks Microsoft’s online efforts. “We just can’t afford to do that anymore. We can’t afford it monetarily, but we can’t afford it ecologically as well.”

The exact business models — and profit margins — for cloud computing are still emerging. Microsoft is taking an intentionally conservative approach to rolling it out, starting with a technology preview.

And Microsoft is by no means abandoning its highly profitable business of selling software that companies run directly on their own hardware. It’s trying to distinguish itself by offering customers a choice.

Bob Muglia, senior vice president of Microsoft’s server and tools business, acknowledged in an interview that there could be some “revenue substitution” as customers choose a services offering to replace something they were doing with their own servers.

And while Microsoft’s existing online services for business are “very positive” in terms of revenue and profitability, building and operating data centers carries a higher capital cost. The exact profit margin associated with services provided through Windows Azure is still to be determined.

“In the overall scheme of things, we still expect to have really good margins out of all of this,” Muglia said.

Cloud technology

On what’s missing in today’s technology revolution:

During the past decade, a dramatic transformation in the world of information technology has been taking shape. It’s a transformation that will change the way we experience the world and share our experiences with others. It’s a transformation in which the barriers between technologies will fall away so we can connect to people and information no matter where we are. It’s a transformation where new innovations will shorten the path from inspiration to accomplishment.

Many of the components of this transformation are already in place. Some have received a great deal of attention. “Cloud computing” that connects people to vast amounts of storage and computing power in massive datacenters is one example. Social networking sites that have changed the way people connect with family and friends are another.

Other components are so much a part of the inevitable march of progress that we take them for granted as soon as we start to use them: cell phones that double as digital cameras, large flat-screen PC monitors and HD TV screens, and hands-free digital car entertainment and navigation systems, to name just a few.

What’s missing is the ability to connect these components in a seamless continuum of information, communication, and computing that isn’t bounded by device or location. Today, some things that our intuition says should be simple still remain difficult, if not impossible. Why can’t we easily access the documents we create at work on our home PCs? Why isn’t all of the information that customers share with us available instantly in a single application? Why can’t we create calendars that automatically merge our schedules at work and home?

On the evolution of personal computing:

Ultimately, the reason to create a cloud services platform is to continue to enhance the value that computing delivers, whether it’s by improving productivity, making it easier to communicate with colleagues, or simplifying the way we access information and respond to changing business conditions.

In the world of software plus services and cloud computing, this means extending the definition of personal computing beyond the PC to include the Web and an ever-growing array of devices. Our goal is to make the combination of PCs, mobile devices, and the Web something that is significantly more than the sum of its parts.

The starting point is to recognize the unique value of each part. The value of the PC lies in its computing power, its storage capacity, and its ability to help us be more productive and create and consume rich and complex documents and content.

For the Web, it’s the ability to bring together people, information, and services so we can connect, communicate, share, and transact with anyone, anywhere, at any time.

With the mobile phone and other devices, it’s the ability to take action spontaneously: to make a call, take a picture, or send a text message in the flow of our activities.

On the blend of computing services that companies will use:

Software plus services also recognizes that for most companies, the ideal way to build IT infrastructure is to find the right balance of applications that are run and managed within the organization and applications that are run and managed in the cloud.

This balance varies by company. A financial services company may choose to maintain customer records within its own datacenter to provide the extra layers of protection that it feels are needed to safeguard the privacy of personal information. It may outsource IT systems that provide basic capabilities such as email.

This balance will change over time within an organization, as well. A company may run its own online transaction system most of the year, but outsource for added capacity to meet extra demand during the holiday season. With software plus services, an organization can move applications back and forth between its own servers and the cloud quickly and smoothly.
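As a closing sketch of that balance, the toy policy below keeps sensitive workloads in-house, bursts overflow demand to the cloud, and otherwise runs on existing servers. The workload names, the sensitivity flag, and the capacity threshold are invented for illustration; real placement decisions weigh far more factors.

```python
# Toy policy for the software-plus-services balance described above.
# Workload names, the sensitivity flag, and the capacity threshold are
# invented for illustration; real placement decisions involve far more.
ON_PREM_CAPACITY = 100  # arbitrary units of in-house capacity

def place_workload(name, sensitive, current_on_prem_load, demand):
    """Decide whether a workload runs in-house or in the cloud."""
    if sensitive:
        return f"{name}: run in our own datacenter (regulated data stays in-house)"
    if current_on_prem_load + demand > ON_PREM_CAPACITY:
        return f"{name}: burst to the cloud (holiday-season overflow)"
    return f"{name}: run in-house (capacity available)"

print(place_workload("customer-records", sensitive=True, current_on_prem_load=40, demand=10))
print(place_workload("email", sensitive=False, current_on_prem_load=40, demand=10))
print(place_workload("online-store", sensitive=False, current_on_prem_load=80, demand=50))
```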