HP has proclaimed that it will buy 12 years of wind power from SunEdison and use it to run a new data centre in Texas.
The firm’s embracing of the wind market follows similar commitments from Facebook, which is planning to run its newest centre, the fifth so far, on wind power alone.
HP said that the 12-year purchase agreement will provide 112MW of wind power sourced from SunEdison and its nearby facilities.
The company said that 112MW could power some 40,000 homes, and will save more than 340,000 tons of carbon dioxide every year.
HP added that the deal puts the firm well on the way to meeting its green goals this year, five years earlier than the previously stated 2020 target.
The renewable energy purchase is a first for HP and will power the new 1.5 million square foot data centre in Texas.
“This agreement represents the latest step we are taking on HP’s journey to reduce our carbon footprint across our entire value chain, while creating a stronger, more resilient company and a sustainable world,” said Gabi Zedlmayer, vice president and chief progress officer for corporate affairs at HP.
“It’s an important milestone in driving HP Living Progress as we work to create a better future for everyone through our actions and innovations.”
SunEdison, which HP calls the “world’s largest renewable energy development company”, is predictably excited to be the provider chosen to put the wind up HP servers.
“Wind-generated electricity represents a good business opportunity for Texas and for HP,” said Paul Gaynor, executive vice president, Americas and EMEA, at SunEdison.
“By powering its data centres with renewable energy, HP is taking an important step toward a clean energy future while lowering operating costs.
“At the same time, HP’s commitment allows us to build this project which creates valuable local jobs and ensures Texan electricity customers get cost-effective energy.”
Oracle said weak sales of its traditional database software licences were made worse by a strong US dollar, which lowered the value of foreign revenue.
Shares of Oracle, often seen as a barometer for the technology sector, fell 6 percent to $42.15 in extended trading after the company’s earnings report on Wednesday.
Shares of Microsoft and Salesforce.com, two of Oracle’s closest rivals, were close to unchanged.
Daniel Ives, an analyst at FBR Capital Markets, said that the announcement speaks to the headwinds Oracle is seeing in the field as its legacy database business experiences slowing growth.
It also shows that, while the cloud business has seen pockets of strength, it is not doing as well as many had thought.
Oracle, like other established tech companies, is looking to move its business to the cloud-computing model, essentially providing services remotely via data centres rather than selling installed software.
The 38-year-old company has had some success with the cloud model, but is not moving fast enough to make up for declines in its traditional software sales.
Oracle, along with German rival SAP, has been losing market share in customer relationship management software in recent years to Salesforce.com, which offers only cloud-based services.
Because of lower software sales and the strong dollar, Oracle’s net income fell to $2.76 billion, or 62 cents per share, in the fourth quarter ended May 31, from $3.65 billion, or 80 cents per share, a year earlier.
Revenue fell 5.4 percent to $10.71 billion. Revenue rose 3 percent on a constant currency basis. Analysts had expected revenue of $10.92 billion, on average.
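The reported and constant-currency figures can be reconciled with some back-of-the-envelope arithmetic. In the sketch below, only the 5.4 percent decline, the 3 percent constant-currency growth and the $10.71bn figure come from the report; everything derived from them is approximate:

```python
# Rough reconciliation of Oracle's reported vs constant-currency revenue.
reported_now = 10.71                      # $bn, as reported this quarter
prior_year = reported_now / (1 - 0.054)   # ~ $11.32bn a year earlier
at_old_fx = prior_year * (1 + 0.03)       # ~ $11.66bn at last year's FX rates
fx_drag = at_old_fx - reported_now        # ~ $0.95bn lost to the strong dollar

print(f"prior year: ${prior_year:.2f}bn, FX drag: ${fx_drag:.2f}bn")
```

In other words, roughly a billion dollars of the quarter's revenue evaporated purely through currency translation, which is the gap between the "fell 5.4 percent" and "rose 3 percent" headlines.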
Sales from Oracle’s cloud-computing software and platform service, an area keenly watched by investors, rose 29 percent to $416 million.
A SENIOR MANAGER at Red Hat has warned the community of the importance of ensuring that OpenStack users have sufficient, qualified support for their infrastructure.
Alessandro Perilli, general manager for cloud management strategy at Red Hat, made the point in a blog post this week entitled ‘Beware scary OpenStack support’.
“Enterprise-grade support for any open source project, and especially for one as complex as OpenStack, can be articulated through many dimensions. However, they are almost never part of the conversation until too late,” he wrote.
Perilli goes on to list six key dimensions that system administrators should be looking for: expertise in the underlying operating system; security response; certification and compliance; code indemnification; vertical consulting; and extended cloud management.
He warned that enterprises put themselves in great danger if they don’t stick to well-established Linux distributions backed by experienced knowledge bases.
“When your OpenStack vendor is using a Linux distribution that has been in the market for a very short period (i.e. one year), has no history of contribution to the Linux distribution of choice, and doesn’t even mention its Linux distribution of choice in its marketing materials, this spells scary enterprise support,” he said.
It seems like obvious advice, but Perilli pointed to several major organisations that have fallen foul of this, and the results can be devastating because of the numbers involved in rolling out such an infrastructure.
Other potential pitfalls in the list include vendors that “cannot backport and forward-port a security fix to older and newer versions of OpenStack before it’s fixed in the trunk code”, “have no experience in the legal implications with open source licensing”, “only support their own hardware”, “have a consulting division that consists of four engineers across five continents”, “have a cloud management platform that cannot support side by side server virtualization, IaaS, and PaaS across private and public environments” and many more.
OpenStack is as vulnerable to problems as any other platform, but being open source means that anyone can offer contributions and anyone can present themselves as a vendor, a consultant or a self-proclaimed expert.
A recent study found that Red Hat’s proprietary solution was among the offerings still able to undercut an OpenStack rollout on cost. Meanwhile, the company’s open source Fedora operating system has just reached version 22.
Perilli concluded by saying: “Any OpenStack provider claiming to offer enterprise-grade support must excel in every one of those aforementioned dimensions, not just one of them.”
In other words, it’s not enough to claim to be an OpenStack expert. You have to walk the walk as well as talk the talk.
The OpenStack framework is rapidly maturing into a business IT platform that is ready for enterprise-grade deployment, according to firms involved in the OpenStack community, including Intel, which announced a technology called Clear Containers to secure containerised apps.
The OpenStack Foundation lined up a succession of organisations and vendors at the first OpenStack Summit of 2015 that are working to improve the platform or are already successfully operating it.
Some are using it on a massive scale: eBay disclosed that its infrastructure already contains over 300,000 processor cores managed by OpenStack.
The message from many of those using and helping to develop OpenStack is that the platform has come a long way since it started as a joint project between Nasa and Rackspace back in 2010, and has become stable and mature enough for production purposes in a wide variety of use cases.
However, there is still room for improvement, especially when it comes to areas like setting up and updating an OpenStack cloud, according to Imad Sousou, general manager of Intel’s Open Source Technology Centre.
“At Intel, we believe that software-defined infrastructure is the cornerstone of the modern data centre, and OpenStack is the cornerstone of software-defined infrastructure, but there is a lot more work to do on it and a lot of sceptics out there,” he said.
Sousou compared OpenStack with Linux, which has taken 20 years or so to mature to the point where organisations can buy something like Red Hat Enterprise Linux, which is easy to install and operate.
“We need to get to that level with OpenStack and software-defined infrastructure, and there is a lot of work going on in the community to get there,” he said.
Intel also detailed at the summit how the company is working to improve the security of containerised applications by using the VT-x extensions in its processors to enforce isolation between containers.
This is called Clear Containers, and is part of Intel’s Clear Linux, a lightweight operating system intended for data centre operations with technologies such as container platforms.
“Intel’s approach with Clear Containers offers enhanced protection using security rooted in hardware. By using virtualisation technology features [VT-x] embedded in the silicon, we can deliver the improved security and isolation advantages of virtualisation technology for a containerised application,” said Sousou.
In addition, Intel’s Clear Linux is able to launch a Clear Container in under 200ms and to run thousands of them on a single server node, according to Sousou.
Other firms discussing their involvement with OpenStack at the summit included Yahoo, which powers its online services with “hundreds of thousands” of servers managed by OpenStack.
US retail giant Walmart, meanwhile, disclosed that it has about 140,000 cores managed by OpenStack in the infrastructure used to operate its e-commerce platform.
“As production scenarios go, it doesn’t get much more serious than Walmart on Black Friday,” commented OpenStack Foundation executive director Jonathan Bryce.
The OpenStack Foundation has announced new interoperability testing requirements for OpenStack-branded products, and is claiming rapid adoption of the federated identity service introduced in the latest OpenStack release, which makes it easier to combine private and public cloud resources.
Foundation executive director Jonathan Bryce said at the first OpenStack Summit event of 2015 that the vision for the OpenStack project was to create a “global footprint of interoperable clouds” that would enable users to seamlessly mix and match resources from their own data centre with those of public cloud providers, delivering a so-called hybrid cloud model.
To this end, Bryce announced new interoperability testing requirements for products that are branded as ‘OpenStack Powered’, including public cloud and hosted private cloud services as well as OpenStack distributions.
“This is a big milestone and introduces common code in every distribution that brands itself as OpenStack, and common APIs that have been tested and validated,” he said.
In practice, this means that, along with an OpenStack Powered logo, products will carry a badge to show certification.
This currently applies only to some of the platform’s core modules, such as Nova (compute), Swift (object storage), Keystone (identity service) and the Glance image service.
But it is intended as a guarantee to users that a certified product contains a set of core services consistent with all other OpenStack products that are similarly certified.
Vendors already offering certified products include HP, IBM, Rackspace, Red Hat, Suse and Canonical, but the list is set to expand this year.
“During 2015, this will go across all products that are OpenStack. You will be able to know what you are getting in an OpenStack Powered product, and you will be able to count on those as your solid foundation for cloud,” Bryce said.
Meanwhile, the Kilo release of OpenStack, available since last month, added federated identity support to the Keystone service for the first time.
Despite the feature being so new, the OpenStack Foundation said that over 30 products and services in the OpenStack application catalogue support federated identity as of today, and that many OpenStack cloud providers have committed to supporting it by the end of this year.
Together, these two announcements are significant for OpenStack’s hybrid cloud proposition, as they will make it much easier to link a customer’s private cloud resources with those of a public cloud provider.
OpenStack Powered certification means that users can count on a consistent environment across the two, while Keystone provides a common authentication system that can integrate with directory services such as LDAP.
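As a sketch of what that directory integration looks like in practice, a Keystone deployment can point its identity backend at an existing LDAP server through `keystone.conf`. The option names below are standard Keystone settings from this era, but the server address and directory DNs are invented placeholders:

```ini
# keystone.conf - illustrative LDAP identity backend (hostnames/DNs invented)
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
group_tree_dn = ou=Groups,dc=example,dc=com
```

With a backend like this, the same corporate directory that authenticates users on premises can authorise them against public cloud resources, which is the crux of the hybrid cloud pitch.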
One company already taking advantage of this is high-tech post-production firm DigitalFilm Tree, which has been working with HP and hosted private cloud firm Bluebox to build a totally cloud-based production system for film and TV content.
The firm demonstrated at the summit how the system enables footage to be captured and uploaded to one cloud, then transferred to another cloud for processing.
Bryce explained that this is just one example of how OpenStack is driving new use cases and expanding what people can do across a variety of industries.
“Interoperability means you can share your cloud footprint. It shows the power of the ‘OpenStack planet’ we are trying to build,” he said.
451 Research has revealed that proprietary cloud offerings are currently more cost effective than OpenStack.
The Cloud Price Index showed that VMware, Red Hat and Microsoft all offer a better total cost of ownership (TCO) than OpenStack distributors.
The report blames the shortfall on a lack of skilled OpenStack engineers, leading to a high price for employing them.
Commercial solutions run at around $0.10 per virtual machine hour, compared with $0.08 for OpenStack distributions, yet the commercial route works out cheaper once labour and other external factors are taken into account.
The report claimed that enterprises could hire an extra three percent of staff for a commercial cloud rollout and still save money.
“Finding an OpenStack engineer is a tough and expensive task that is impacting today’s cloud-buying decisions,” said Dr Owen Rogers, senior analyst at 451 Research.
“Commercial offerings, OpenStack distributions and managed services all have their strengths and weaknesses, but the important factors are features, enterprise readiness and the availability of specialists who understand how to keep a deployment operational.
“Buyers need to balance all of these aspects with a long-term strategic view, as well as TCO, to determine the best course of action for their needs.”
Enterprises need to consider whether they may end up locked into a proprietary feature which could then go up in price, or whether features may become decommissioned over time.
451 Research believes that this TCO gulf will narrow in time as OpenStack matures and the talent pool grows.
The research also suggests that OpenStack can already provide a TCO advantage over DIY solutions, with a tipping point at which 45 percent of manpower is saved. The company believes that the ‘golden ratio’ is 250 virtual machines per engineer.
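Those headline numbers can be turned into a back-of-the-envelope comparison. In the sketch below, only the two per-VM-hour rates and the 250-VM-per-engineer ratio come from the report; the fleet size and salary figures are invented for illustration:

```python
# Rough annual TCO model built on 451 Research's headline figures.
# Per-VM-hour rates and the 250-VM "golden ratio" are from the report;
# fleet size and salaries are assumptions.
HOURS_PER_YEAR = 24 * 365

def annual_tco(vms, rate_per_vm_hour, engineers, salary):
    """Usage charges plus labour for one year."""
    return vms * HOURS_PER_YEAR * rate_per_vm_hour + engineers * salary

vms = 500
commercial = annual_tco(vms, 0.10, engineers=2, salary=100_000)
# OpenStack is cheaper per VM hour, but scarce OpenStack engineers
# command a salary premium, eroding the advantage.
openstack = annual_tco(vms, 0.08, engineers=vms // 250, salary=150_000)

print(f"commercial: ${commercial:,.0f}")
print(f"openstack:  ${openstack:,.0f}")
```

With these assumed staffing costs the commercial option comes out slightly ahead, mirroring the report's conclusion; tilt the salary or headcount assumptions the other way and OpenStack wins, which is exactly the balancing act Rogers describes.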
OpenStack’s next major release, Kilo, has just arrived, and Ubuntu and HP are the first distributions to incorporate it.
Red Hat and Canonical are major contributors to the OpenStack code in addition to shipping their own products, as is HP with its Helion range.
As part of the announcement, Citrix said that products including NetScaler and XenServer will be coming to OpenStack.
Citrix has been a contributor to OpenStack for some time, but this sponsorship announcement sees the company ramping up its involvement and integrating its core product lines.
Klaus Oestermann, senior vice president and general manager of delivery networks at Citrix, said: “We’re pleased to formally sponsor the OpenStack Foundation to help drive cloud interoperability standards.
“Citrix products like NetScaler, through the recently announced NetScaler Control Centre, and XenServer are already integrated with OpenStack.
“Our move to support the OpenStack community reflects the great customer and partner demand for Citrix to bring the value of our cloud and networking infrastructure products to customers running OpenStack.”
Citrix already supports the Apache Software Foundation and the Linux Foundation, and has pledged to continue investing in Apache CloudStack and CloudPlatform in addition to its work with OpenStack.
Jonathan Bryce, executive director of the OpenStack Foundation, added: “Diversity and choice are two powerful drivers behind the success of OpenStack and the growing list of companies that have chosen OpenStack as their infrastructure platform.
“We’re glad to see Citrix become a corporate sponsor, and we look forward to the contributions they can bring to the community as it continues driving cloud infrastructure innovation and software maturity.”
Canonical announced on Tuesday that the 15.04 edition of Ubuntu OpenStack will be the first commercially available product to be based on OpenStack Kilo, which is due for release at the end of the month.
Early adopters will get the release candidate, and the full version will follow days after.
Citrix is joining the alliance at an interesting time. Earlier this year, it was revealed that HP has become the largest single contributor to the current OpenStack version, Juno, overtaking Red Hat.
A number of alliances are forming within the OpenStack community to try to gain the upper hand. HP has buddied up with telecoms companies including AT&T and BT, while Juniper and Mirantis have joined forces, though the latter has confirmed that this is not a snub to VMware.
Citrix coming aboard with its existing ties to Apache and Linux seems to represent another example of the cross-pollination of the OpenStack movement across the industry, with companies clamouring to back it as either a first or second line of opportunity.
Red Hat has been telling everyone its plans to integrate the latest Linux 4.0 kernel into its products.
In a statement, a spokesman told us, “Red Hat’s upstream community projects will begin working with 4.0 almost immediately; in fact, Fedora 22 Alpha was based on the RC1 version of the 4.0 kernel.
“From a productization perspective, we will keep an eye on these integration efforts for possible inclusion into Red Hat’s enterprise portfolio.
“As with all of our enterprise-grade solutions, we provide stable, secure and hardened features, including the Linux kernel, to our customers – once we are certain that the next iterations of the Linux kernel, be it 4.0 or later, has the features and maturity that our customer base requires, we will begin packaging it into our enterprise portfolio with the intention of supporting it for 10 years, as we do with all of our products.”
Meanwhile, Canonical Head Honcho Mark Shuttleworth has confirmed that Linux Kernel 4.0 should be making its debut in Ubuntu products before the end of the year.
In an earlier note to The INQUIRER, Shuttleworth confirmed that the newly released kernel’s integration was “likely to be in this October release.”
The news follows the release of version 4.0 of the Linux kernel, which arrived, as T S Eliot would put it, “not with a bang but a whimper”.
Writing on the Linux Kernel Mailing List on Sunday afternoon, Linux overlord Linus Torvalds explained that the new version was being released according to schedule, rather than because of any dramatic improvements, and because of a lack of any specific reason not to.
“Linux 4.0 was a pretty small release in linux-next and in final size, although obviously ‘small’ is relative. It’s still over 10,000 non-merge commits. But we’ve definitely had bigger releases (and judging by linux-next v4.1 is going to be one of the bigger ones),” he said.
“Feature-wise, 4.0 doesn’t have all that much special. Much has been made of the new kernel patching infrastructure, but realistically that wasn’t the only reason for the version number change. We’ve had much bigger changes in other versions. So this is very much a ‘solid code progress’ release.”
Come to think of it, it is very unlikely that T S Eliot would ever have written about Linux kernels, but that’s not the point.
Torvalds, meanwhile, explained that he is happier releasing to a schedule than because of any specific feature, although he did note that the repository has passed four million git objects, and Linux 3.0 was released after the two million mark, so there’s a nice symmetry there.
In fact, back in 2011 the version numbering of the Linux kernel was a matter of some debate, and Torvalds’ lacklustre announcement seems to be pre-empting more of the same.
In a subsequent post Torvalds jokes, “the strongest argument for some people advocating 4.0 seems to have been a wish to see 4.1.15 – because ‘that was the version of Linux Skynet used for the T-800 Terminator.’”
Canonical and Ericsson have announced their arrival in the cloud telecoms market after signing a three-year collaboration to develop Network Functions Virtualisation (NFV) products for software-defined communications networks.
The deal will see Ericsson deploying the Ubuntu Server operating system as the host for all its cloud offerings.
John Zannos, VP of cloud alliances and channels at Canonical, told The INQUIRER: “It’s actually a very exciting time to be alive, with the pace of change in the marketplace. As we move toward software-defined solutions more and more, we’re going to see the accelerating pace of change more than ever.”
By working together, the companies hope to drive adoption of NFV products and accelerate research.
The news comes just a day after Oracle and Intel announced a similar deal based on an Oracle hypervisor to control expansion and contraction of communication network nodes at an intelligent level.
As with that announcement, the Canonical-Ericsson arrangement is based on the interoperability provided by OpenStack, meaning that the alignment between the two projects is set to be much closer than one might expect.
“What is most exciting for us is not just the chance to work with Ericsson, which already carries nearly 40 percent of the world’s mobile traffic, but the opportunities that working together brings for us to take these concepts to the next level,” said Zannos.
Ubuntu is used in 80 percent of OpenStack cloud deployments worldwide, and using Ubuntu Server means that the partnership should be able to bring the newest ideas in open platform NFV to market.
“Our ability to offer scale-out solutions means that for the first time we can help meet the massive demand on telecoms in the future,” said Zannos.
“I don’t want to speculate on ‘infinite scalability’ because infinite is a pretty big number, but we’re certainly able to create solutions without the restraints of traditional hardware.”
The rollout of open platform NFV acts as a natural next step after the arrival of cloud communication. Virtualizing the workload of global communications, and reducing the natural lag of hardware controllers, allows providers to offer cheaper running costs, lower energy use and greater flexibility to grow and contract the network according to customer need.
Zannos added: “Organizations are struggling to keep pace with data, complexity, cost and compliance demands, so this partnership will help customers overcome many of these challenges.”
The Ericsson name disappeared from the consumer market after Sony acquired the joint Sony-Ericsson venture in 2012, but the Swedish company’s reach remains vast. A venture into virtual telecoms, alongside the biggest single Linux distribution, is bound to disrupt the market.
Ericsson recently became the latest company to join the alliance of Canonical’s Snappy Ubuntu Core for the Internet of Things.
Zannos also confirmed that there will be room for cross-fertilization between the two alliances in the coming months and years, particularly with the opportunities for the silent, seamless firmware upgrades that underpin the technology.
Oracle and Intel have teamed up for the first demonstration of carrier-grade network function virtualization (NFV), which will allow communication service providers to use a virtualized, software-defined model without degradation of service or reliability.
The Oracle-led project uses the Intel Open Network Platform (ONP) to create a robust service over NFV, using intelligent direction of software to create viable software-defined networking that replaces the clunky equipment still prevalent in even the most modern networks.
Barry Hill, Oracle’s global head of NFV, told The INQUIRER: “It gets us over one of those really big hurdles that the industry is desperately trying to overcome: ‘Why the heck have we been using this very tightly coupled hardware and software in the past if you can run the same thing on standard, generic, everyday hardware?’. The answer is, we’re not sure you can.
“What you’ve got to do is be smart about applying the right type and the right sort of capacity, which is different for each function in the chain that makes up a service.
“That’s about being intelligent with what you do, instead of making some broad statement about generic vanilla infrastructures plugged together. That’s just not going to work.”
Oracle’s answer is to use its Communications Network Service Orchestration Solution to control the OpenStack system and shrink and grow networks according to customer needs.
Use cases could be scaling out a carrier network for a rock festival, or transferring network priority to a disaster recovery site.
“Once you understand the extent of what we’ve actually done here, you start to realize just how big an announcement this is,” said Hill.
“On the fly, you’re suddenly able to make these custom network requirements instantly, just using off-the-shelf technology.”
The demonstration configuration optimizes the performance of an Intel Xeon E5-2600 v3 processor designed specifically for networking, and shows for the first time a software-defined solution which is comparable to the hardware-defined systems currently in use.
In other words, it can orchestrate services from the management and orchestration level right down to a single core of a single processor, and then hyperscale it using resource pools to mimic the specialized characteristics of a network appliance, such as a large memory page.
“It’s kind of like the effect that mobile had on fixed line networks back in the mid-nineties where the whole industry was disrupted by who was providing the technology, and what they were providing,” said Hill.
“Suddenly you went from 15-year business plans to five-year business plans. The impact of virtualization will have the same level of seismic change on the industry.”
Today’s announcement is fundamentally a proof-of-concept, but the technology that powers this kind of next-generation network is already evolving its way into networks.
Hill explained that carrier demand had led to the innovation. “The telecoms industry had a massive infrastructure that works at a very slow pace, at least in the past,” he said.
“However, this whole virtualization push has really been about the carriers, not the vendors, getting together and saying: ‘We need a different model’. So it’s actually quite advanced already.”
NFV appears to be the next gold rush area for enterprises, and other consortia are expected to make announcements about their own solutions within days.
The Oracle/Intel system is based around OpenStack, and the company is confident that it will be highly compatible with other systems.
The ‘Oracle Communications Network Service Orchestration Solution with Enhanced Platform Awareness using the Intel Open Network Platform’ – or OCNSOSWEPAUTIONP as we like to think of it – is currently on display at Oracle’s Industry Connect event in Washington DC.
The INQUIRER wonders whether there is any way the marketing department can come up with something a bit more catchy than OCNSOSWEPAUTIONP before it goes on open sale.
HP has announced its first off-the-shelf configured private cloud based on OpenStack and Cloud Foundry.
HP Helion Rack continues the Helion naming convention for HP’s cloud offerings, and will, it is hoped, help enterprise IT departments speed up cloud deployment by offering a solid template system and removing the months of design and build.
Helion Rack is a “complete” private cloud with integrated infrastructure-as-a-service and platform-as-a-service capabilities that mean it should be a breeze to get it working with cloud-dwelling apps.
“Enterprise customers are asking for private clouds that meet their security, reliability and performance requirements, while also providing the openness, flexibility and fast time-to-value they require,” said Bill Hilf, senior vice president of product management for HP Helion.
“HP Helion Rack offers an enterprise-class private cloud solution with integrated application lifecycle management, giving organisations the simplified cloud experience they want, with the control and performance they need.”
HP cites the key features of its product as rapid deployment, simplified management, easy scaling, workload flexibility, faster native-app development and, of course, the open architecture of OpenStack and Cloud Foundry, providing a vast support network for implementation, use cases and customisation.
The product is built on HP ProLiant DL servers, and is assembled by HP and configured with the HP Helion OpenStack and Development Platform. HP and its partners can then work alongside customers to find the best way to exploit the product knowing that it is up and running from day one.
HP Helion Rack will be available in April with prices varying by configuration. Finance is available for larger configurations.
Suse launched its own OpenStack Cloud 5 with Sahara data processing earlier this month, just one of many implementations of OpenStack designed to help roll out the cloud revolution quickly to enterprises, but HP is pioneering the offer of a complete, end-to-end package.
Juniper and Mirantis have signed an engineering partnership that they believe will lead to a reliable, scalable software-defined networking solution.
Mirantis OpenStack will now inter-operate with Juniper Contrail Networking, as well as OpenContrail, an open source software-defined networking system.
The two companies have published a reference architecture for deploying and managing Juniper Contrail Networking with Mirantis OpenStack to simplify deployment and reduce the need for third-party involvement.
Based on OpenStack Juno, Mirantis OpenStack 6.0 will be enhanced by a Fuel plugin in the second quarter that will make it even easier to deploy large-scale clouds in house.
However, Mirantis has emphasized that the arrival of Juniper to the fold is not a snub to the recently constructed integration with VMware.
Nick Chase of Mirantis explained, “…with this Juniper integration, Mirantis will support BOTH VMware vCenter Server and VMware NSX AND Juniper Networks Contrail Networking. That means that even if they’ve got VMware in their environment, they can choose to use NSX or Contrail for their networking components.
“Of course, all of that begs the question, when should you use Juniper, and when should you use VMware? Like all great engineering questions, the answer is ‘it depends’. How you choose is going to be heavily influenced by your individual situation, and what you’re trying to achieve.”
Juniper outlined its goals for the tie-up as:
- Reduce cost by enabling service providers and IT administrators to easily embrace SDN and OpenStack technologies in their environments
- Remove the complexity of integrating networking technologies in OpenStack virtual data centres and clouds
- Increase the effectiveness of their operations with fully integrated management for the OpenStack and SDN environments through Fuel and Juniper Networks Contrail SDN Controller
The company is keen to emphasise that this is not meant to be a middle finger at VMware, but rather a demonstration of the freedom of choice offered by open source software. However, it serves as another demonstration of how even the FOSS market is growing increasingly proprietary and competitive.
Canonical has announced a new version of the Ubuntu operating system designed to bring a united front to the Internet of Things (IoT), after a preview alpha was trialed late last year.
The super-stripped-down, lightweight Snappy Ubuntu Core is designed to allow developers to create IoT applications quickly and easily and release them securely across the network.
This means that many devices with firmware that would have been unpatched after vulnerabilities such as Heartbleed can now be updated quickly, easily and silently.
Apps are at the heart of the infrastructure, with app store functionality able to offer off-the-peg firmware, applications and runtime libraries to help facilitate common standards across the IoT.
“We found that the IoT required a way of installing apps similar to the way you do on your phone,” Maarten Ectors, Ubuntu VP for the IoT, told The INQUIRER.
“Developers can have app stores for things that don’t have app stores today. That could be your vacuum cleaner, it could be your robot, it could be a drone.”
The company hopes that the future of robots will be a large part of the success of Snappy, and is working closely with a range of start-ups and Kickstarter projects to bring home automation and intelligent robotics to life.
“As people add more items and add complexity to their home networks, they want stuff to just work and to keep working, no matter what vulnerabilities we discover in the huge mountain of open source software that is powering all of it,” added Mark Shuttleworth, founder of Ubuntu.
“Many of these items that you’ll be buying will be Ubuntu anyway, but Snappy will allow them to be fully robust, fully automated and fully secure.”
Ubuntu Core has a tiny footprint. It can run with as little as 600MHz of processing power and 128MB of RAM, with suitable ARM processor baseboards starting at $35 retail.
The platform is also x86 compatible, and this flexibility means that IoT products could eventually be mass produced for a matter of pennies.
Last year Broadcom offered a similar device called the Wiced Sense, a $20 kit aimed at helping to design IoT prototypes.
The first Snappy Ubuntu Core products are expected to be announced in the second quarter. Expect to see a lot of them on Christmas lists for 2015.
Canonical launched Ubuntu Snappy in cooperation with Microsoft Azure on Tuesday, an alpha preview of a minimalist Ubuntu Core virtual machine implementation for cloud deployments of Linux application software running in Docker containers.
Canonical said: “Today we’re announcing ‘snappy’ Ubuntu Core, a new rendition of Ubuntu for the cloud with transactional updates.
“The snappy approach is faster, more reliable, and lets us provide stronger security guarantees for apps and users – that’s why we call them ‘snappy’ applications.”
Ubuntu Snappy is the Ubuntu Core Linux operating system along with atomic image updating for the operating system and applications software running in Docker containers.
“Ubuntu Core provides transactional updates with rigorous application isolation,” said Canonical and Ubuntu founder Mark Shuttleworth.
“This is the smallest, safest platform for Docker deployment ever, and with snappy packages, it’s completely extensible to all forms of container or service. We’re excited to unleash a new wave of developer innovation with snappy Ubuntu!”
Canonical explained that Snappy apps and Ubuntu Core can be upgraded atomically and rolled back if needed, which it described as a “bulletproof” approach to systems management that is ideal for container deployments.
“It’s called ‘transactional’ or ‘image-based’ systems management, and we’re delighted to make it available on every Ubuntu certified cloud,” the firm said.
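The transactional, image-based model Canonical describes can be sketched roughly as follows: the system keeps the running image and a spare slot, writes an update aside as a complete new image, and switches over atomically, so a bad update is undone by switching back. This is a minimal illustrative model of the concept only, not Canonical's actual snappy implementation; all names here are invented for the sketch.

```python
# Illustrative model of transactional (image-based) updates: two image
# slots plus an "active" pointer, so an update either fully applies or
# can be rolled back. A sketch of the idea, not snappy's real code.

class TransactionalSystem:
    def __init__(self, base_image):
        self.slots = {"a": base_image, "b": None}
        self.active = "a"  # the pointer switch is the only visible change

    def update(self, new_image):
        spare = "b" if self.active == "a" else "a"
        self.slots[spare] = new_image  # write the whole new image aside
        self.active = spare            # atomic switch to the new image

    def rollback(self):
        # the previous image is still intact in the other slot
        self.active = "b" if self.active == "a" else "a"

    @property
    def running(self):
        return self.slots[self.active]

system = TransactionalSystem("core-1.0")
system.update("core-1.1")
assert system.running == "core-1.1"
system.rollback()                      # bad update? switch back
assert system.running == "core-1.0"
```

The point of the design is that the update never modifies the running image in place, which is why Canonical can describe rollback as "bulletproof".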
Microsoft Azure is hosting the alpha preview trial environment based on the Docker container framework. For Canonical, business partners are where you find them, we reckon. Microsoft corporate VP Bob Kelly said:
“Microsoft loves Linux, and we’re excited to be the first cloud provider to offer a new rendition of one of the most popular Linux platforms in the rapidly growing Azure cloud.
“By delivering the new cloud-optimised Ubuntu Core image on Azure, we’re extending our first-class support for Linux and enabling freedom of choice so developers everywhere can innovate even faster.”
Docker CEO Ben Golub claimed that Docker’s transactional application delivery is shaping modern application development and DevOps practice, and that snappy Ubuntu brings the same transactional updates to the operating system itself.
“We’re delighted to see the Docker ecosystem expand with this exciting new platform,” he added.
Canonical released Ubuntu Server 14.10 with support for OpenStack cloud deployment in October.
HP has announced general availability of its Helion OpenStack cloud platform and Helion Development Platform based on Cloud Foundry.
The Helion portfolio was announced by HP earlier this year, when the firm disclosed that it was backing the OpenStack project as the foundation piece for its cloud strategy.
At the time, HP issued the HP Helion OpenStack Community edition for pilot deployments, and promised a full commercial release to follow, along with a developer platform based on the Cloud Foundry code.
HP revealed today that the commercial release of HP Helion OpenStack is now available as a fully supported product for customers looking to build their own on-premise infrastructure-as-a-service cloud, along with the HP Helion Development platform-as-a-service designed to run on top of it.
“We’ve now gone GA [general availability] on our first full commercial OpenStack product and actually started shipping it a couple of weeks ago, so we’re now open for business and we already have a number of customers that are using it for proof of concept,” HP’s CloudSystem director for EMEA, Paul Morgan, told The INQUIRER.
Like other OpenStack vendors, HP is offering more than just the bare OpenStack code. Its distribution is underpinned by a hardened version of HP Linux, and is integrated with other HP infrastructure and management tools, Morgan said.
“We’ve put in a ton of HP value add, so there’s a common look and feel across the different management layers, and we are supporting other elements of our cloud infrastructure software today, things like HP OneView, things like our Cloud Service Automation in CloudSystem,” he added.
The commercial Helion build has also been updated to include Juno, the latest version of the OpenStack framework released last week.
Likewise, the HP Helion Development Platform takes the open source Cloud Foundry platform and integrates it with HP’s OpenStack release to provide an environment for developers to build and deploy cloud-based applications and services.
HP also announced an optimised reference model for building a scalable object storage platform based on its OpenStack release.
HP Helion Content Depot is essentially a blueprint to allow organisations or service providers to put together a highly available, secure storage solution using HP ProLiant servers and HP Networking hardware, with access to storage provided via the standard OpenStack Swift application programming interfaces.
Morgan said that the most interest in this solution is likely to come from service providers looking to offer a cloud-based storage service, although enterprise customers may also deploy it internally.
“It’s completely customisable, so you might start off with half a petabyte, with the need to scale to maybe 2PB per year, and it is a certified and fully tested solution that takes all of the guesswork out of setting up this type of service,” he said.
Content Depot joins the recently announced HP Helion Continuity Services as one of the growing number of solutions that the firm aims to offer around its Helion platform, he explained. These will include point solutions aimed at solving specific customer needs.
The firm also last month started up its HP Helion OpenStack Professional Services division to help customers with consulting and deployment services to implement an OpenStack-based private cloud.
Pricing for HP Helion OpenStack comes in at $1,200 per server with 9×5 support for one year. Pricing for 24×7 support will be $2,200 per server per year.
“We see that is very competitively priced compared with what else is already out there,” Morgan said.