Monday, May 25, 2009

VMware releases its 'cloud OS'

VMware has released the latest version of its core virtualisation platform, vSphere 4, claiming it acts as a cloud operating system for the datacentre.

The product, the renamed successor to VMware Infrastructure 3, was made generally available on Wednesday, having been announced in April. The company says vSphere 4 will allow companies to centrally manage servers, storage and networks in the datacentre as though they were a single pool of computing resource.

Potentially, the new platform can cut capital and operational costs, improving consolidation ratios by up to 30 percent and reducing power consumption by 20 percent, according to VMware.

"The bottom line is that this will enable customers to remove a lot of unnecessary overhead costs in the datacentre, where they might currently only be using 10 percent of server capacity," said Fredrik Sjostedt, VMware's EMEA director of product marketing. "vSphere 4 operates within the datacentre like a normal OS on a single server, but it provides access to the entire infrastructure, from storage to memory and switches."

The key improvements in the latest version of the platform are better scalability for large-scale environments, more sophisticated management tools for controlling security and service levels, and greater flexibility for customers choosing heterogeneous hardware, software and service-delivery models.

One of the key new features of vSphere 4 is the additional functionality provided by the 'virtual distributed switch', said Sjostedt. Previously, each virtualised server needed its own virtual switch, which made configuration complex and time-consuming. With vSphere 4, a single virtual switch can manage the entire infrastructure.

"What's interesting is that the physical networking team can now manage virtualised infrastructure for the first time, using the normal management tools. This makes the entire configuration process much easier, and reduces overall complexity," Sjostedt said.

VMware's focus is on building private rather than public clouds, but organisations could eventually move virtual machines out to third parties, according to Chris Ingle, a research analyst with IDC. "It's hard to measure who is ahead in the market, but VMware is showing some strong performance numbers with this release," Ingle said.

The most useful new feature in vSphere 4 for customers is likely to be the increased level of support for CPU, memory and storage, said Ingle. This may persuade enterprises that currently do not use virtualisation to make the leap, he said.

Thursday, May 21, 2009

Virgin Media trials 200Mbps broadband

Broadband speed freaks take note: cable broadband purveyor Virgin Media has upped the ante by announcing a trial of 200Mbps broadband - four times faster than its current fastest fat pipe service.

The company said it will use the trial to assess the commercial viability of deploying a 200Mbps service in the UK - and to investigate the kind of applications consumers could use regularly in such a speedy future.

Around 100 'pilot customers' will eventually be involved in the trial, which started last week in Ashford, Kent, and will run for at least six months. The ISP claims it is the fastest implementation of DOCSIS3 (Data Over Cable Service Interface Specification) technology in the world - faster even than cable services in Japan and the US, which offer 160Mbps and 101Mbps respectively.

Possible next-gen broadband apps could include HD and 3D TV entertainment services, remote delivery of IT support to home users, videoconferencing and home surveillance, according to the ISP.

At the end of last year Virgin launched its current fastest 50Mbps service - still the fastest consumer broadband service available in the UK. However, BT has been making noise on the speed front - pledging to roll out fibre to 10 million homes by 2012, enabling speeds of up to 100Mbps and opening up the possibility that Virgin could lose its headline-speed crown.

Ian Fogg, principal analyst at Forrester Research, told silicon.com that while he doesn't believe Virgin will be offering a commercial 200Mbps service anytime soon, the company is nevertheless firing "a shot across the bows" of DSL providers to let them know it has more to offer.

"Virgin Media is clearly positioning around the speed of its broadband service and they're looking to make hay while the speeds of their rivals using DSL are limited by the copper telephone line," said Fogg.

"Virgin Media are shaking up the UK broadband market. They're looking to increase [consumer] dissatisfaction with speed."

But the analyst said the trial is not just about posturing: "There's a genuine piece of work to be done here," he said, adding: "It's all very well testing something in the lab but actually giving even a small number of consumers this service into their homes will deliver different information, different feedback."

One area where the trial could well shine a light, in Fogg's view, is the "bottlenecks" lurking elsewhere in the network.

"You get a point where the web servers, the general speed of the internet becomes the bottleneck - not the connection into someone's house. And I would expect that 200Mbps would reveal those bottlenecks elsewhere," he said.

"How fast a particular service is depends on all sorts of things - the speed of the web server to deliver the webpage, the speed of the connection of that web server onto the internet, the connection across the internet, the connection through that internet service provider's network and then there's the connection into the house… and of course there's the connection inside the house."

Fogg added that wi-fi routers can't currently support 200Mbps - so wireless home networks would also constrain users' speed dreams.
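
Fogg's chain-of-links argument boils down to a simple rule: the throughput a user actually sees is capped by the slowest hop between the content and the device. A small illustrative sketch, with entirely invented figures, makes the point; with a 200Mbps access line, the constraint tends to move to the home wi-fi and the far end of the connection.

def effective_throughput_mbps(hops):
    """Return the limiting hop and its capacity in Mbps (slowest link wins)."""
    bottleneck = min(hops, key=hops.get)
    return bottleneck, hops[bottleneck]

# Hypothetical capacities, in Mbps, for each hop Fogg lists.
chain = {
    "web server output":           80.0,
    "server's internet uplink":   300.0,
    "path across the internet":  1000.0,
    "ISP's own network":          400.0,
    "access line into the house": 200.0,   # the trial service
    "home wi-fi":                  54.0,   # 802.11g-era wireless
}

hop, mbps = effective_throughput_mbps(chain)
print(f"Limiting hop: {hop} at {mbps:.0f}Mbps")
# With these numbers the 200Mbps line is not the constraint; the home wi-fi
# is, and after that the web server itself - which is Fogg's point.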

Sunday, May 3, 2009

Is the Chip Canary Starting to Tweet Again?

Chip sales are up slightly for March, but it's too soon to pop the champagne corks.

Further evidence that a tech sector recovery, however weak, is beginning to take place came from the Semiconductor Industry Association, which reported that global chip sales rose 3.3 percent in March over February.

However, it's not party time just yet. Because January was such a disaster, and February not much better, the SIA reports Q1 2009 sales were $44 billion, down 30 percent from $62.8 billion in the first quarter of 2008, and down 15.7 percent from the fourth quarter 2008 sales of $52.2 billion.

March's $14.7 billion in sales, a modest uptick from $14.2 billion in February, reflected growth across all regions except Japan, which is being hit by a particularly hard economic downturn.
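
For readers who want to check the percentages, the quick sketch below reproduces them from the dollar figures quoted above. Note that the monthly figures are rounded in the article, so the computed month-on-month change comes out near 3.5 percent rather than the SIA's reported 3.3 percent.

def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

q1_2009, q1_2008, q4_2008 = 44.0, 62.8, 52.2   # billions of dollars
march, february = 14.7, 14.2                   # billions, rounded

print(f"Q1 2009 vs Q1 2008: {pct_change(q1_2009, q1_2008):+.1f}%")  # about -29.9%
print(f"Q1 2009 vs Q4 2008: {pct_change(q1_2009, q4_2008):+.1f}%")  # about -15.7%
print(f"March vs February:  {pct_change(march, february):+.1f}%")   # about +3.5%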

Chip sales are considered a leading indicator of the health of the tech sector, since everything is built on semiconductors. Every desktop, laptop, server or smartphone has a number of chips in it. If sales can continue to bounce back, that would be a leading indicator of recovery.

However, a number of market researchers, including the SIA, iSuppli, Gartner, IDC and In-Stat, have said it will be years before the market recovers to 2007 levels.

The first hints of recovery came earlier this month when Intel (NASDAQ: INTC) chief executive Paul Otellini said, during the company's earnings conference call, that sales had picked up during the quarter. By March, he said, Intel was actually getting expedited orders, meaning customers were asking it to hurry up delivery.

"The modest sequential rebound in worldwide sales in March suggests that demand has stabilized somewhat, albeit at substantially lower levels than last year," said SIA President George Scalise in a statement.

"There are some bright spots such as smartphones and netbook PCs, but there are no clear signs of early firming of demand in other major end markets such as automotive, corporate information technology, and consumer electronics," he added.

Scalise also said he expected the stimulus packages passed by various governments would begin to take effect in 2010.

Intel Improves Power Management in Datacenters

New software gives a more accurate measure of power in use, and shifts it to where it is needed, so long as you buy a Nehalem server.

Intel has unveiled the Intel Data Center Manager, a software development kit for monitoring power consumption on datacenter servers and adjusting power consumption on an as-needed basis.

The caveat? To take full advantage of this software, the servers must include Intel's Intelligent Power Node Manager, which is embedded in Intel's newly launched Xeon 5500 "Nehalem" generation of server chips.

The software can monitor intelligent power supplies, but to get the full benefit of thermal and power management, the Intelligent Power Node Manager has to be present on the hardware and exposed.

The software, released Thursday, is designed to give a more accurate look into the power being consumed in a server cabinet, which can hold a number of rack-mounted servers or blades. The problem, as Jon Khazam, vice president and general manager of the manageability and middleware division at Intel (NASDAQ: INTC), noted, is that people build to the wrong specs.

Each cabinet has a few power supplies, as much for redundancy as for powering the servers. If a cabinet has, for example, three power supplies rated at a combined 6,000 watts, then the cabinet won't be populated with anything beyond 6,000 watts of total power draw, and usually less. The maximum power is listed on the power supply's nameplate.

The problem is that people build to that maximum spec. If a rack-mounted server has a nameplate draw of 300 watts, then at most 20 of them will go into that 6,000-watt cabinet. Yet, as Khazam noted, servers don't run at their maximum draw; they usually operate at some power level well below it.

"There's been the general challenge of dealing with power and constraints of power," he told InternetNews.com. "They tend to design a datacenter at the maximum of the nameplate power spec, which ends up introducing a lot of overbuild into the datacenter, where there's a lot more capacity to handle servers than they end up deploying."

The Data Center Manager works in two areas: on the individual rack and on the datacenter at large. On an individual rack, it gives an accurate measure of the overall average power draw, not the maximum draw, showing both the individual units and the total power draw for the entire cabinet. Administrators can then limit each rack's power draw; a rack with a 300 watt maximum might be limited to 150 or 200 watts, for example, allowing more compute density to be added.
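
The density argument is straightforward arithmetic. The sketch below reuses the 6,000 watt cabinet and 300 watt nameplate figures from the example above; the cap levels are hypothetical, chosen only to show how capping below nameplate frees up extra slots.

CABINET_BUDGET_W = 6000          # combined rating of the cabinet's power supplies
NAMEPLATE_W = 300                # per-server nameplate maximum
CAP_LEVELS_W = [250, 215, 200]   # hypothetical per-server power caps

servers_by_nameplate = CABINET_BUDGET_W // NAMEPLATE_W
print(f"Sized to nameplate: {servers_by_nameplate} servers per cabinet")   # 20

for cap in CAP_LEVELS_W:
    n = CABINET_BUDGET_W // cap
    gain = (n - servers_by_nameplate) / servers_by_nameplate * 100
    print(f"Capped at {cap}W: {n} servers (+{gain:.0f}% density)")
# 250W -> 24 servers (+20%), 215W -> 27 (+35%), 200W -> 30 (+50%)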

Power sharing that works

When one rack in the cabinet is running at the upper limits of its power capacity, the software can then examine the other racks, find ones that are drawing much less power than they have allocated, and give the power to the racks that need it.

In one example, Intel worked with Chinese search engine provider Baidu and found it could increase cabinet density by 20 to 40 percent just by populating the cabinet according to average draw rather than theoretical maximum. The research is detailed in an Intel white paper.

On a larger scale, power management can be aggregated not just among individual racks in a cabinet but across the whole datacenter. Data Center Manager's console lets admins set thermal and power policies for all systems, so if some machines need more power while others sit idle, the idle ones give up their spare allocation to the ones that need it.
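
The article doesn't describe the SDK's internals, but the sharing behaviour can be sketched generically: under a fixed group budget, racks with plenty of unused allocation lend it to racks pushing against their caps. Everything below, including the thresholds and wattages, is invented for illustration and is not Intel's Data Center Manager API.

def rebalance_caps(draw_w, cap_w, floor_w=100, reserve_w=50):
    """Return new per-rack caps without exceeding the original total budget.

    draw_w    : current measured draw per rack, in watts
    cap_w     : current cap per rack, in watts
    floor_w   : never cap a rack below this, even if it is idle
    reserve_w : headroom a donor rack always keeps for itself
    """
    total_budget = sum(cap_w)
    headroom = [c - d for c, d in zip(cap_w, draw_w)]
    new_caps = list(cap_w)

    # Donors: racks with comfortable headroom give up part of their allocation.
    donated = 0
    for i, h in enumerate(headroom):
        if h > reserve_w:
            give = min(h - reserve_w, new_caps[i] - floor_w)
            new_caps[i] -= give
            donated += give

    # Needy racks split the donated budget. (A real controller would also
    # clamp each cap to the rack's physical maximum.)
    needy = [i for i, h in enumerate(headroom) if h <= reserve_w]
    for i in needy:
        new_caps[i] += donated // len(needy)

    assert sum(new_caps) <= total_budget
    return new_caps

# Rack 0 is busy (290W of a 300W cap); racks 1 and 2 are mostly idle.
print(rebalance_caps(draw_w=[290, 120, 110], cap_w=[300, 300, 300]))
# -> [570, 170, 160]: the busy rack gets the headroom the idle racks gave up.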