The OpenStack Community is turning its attention to support for containers and improving the platform’s enterprise-worthiness, as the OpenStack Foundation celebrated gaining non-profit status from the US government, a move that will free up extra resources for development, the organisation said.
Foundation executive director Jonathan Bryce said at the OpenStack Silicon Valley conference at California’s Computer History Museum that OpenStack has developed over the past five years into a general-purpose “integration engine” for IT departments to build infrastructure that allows them to operate a diverse array of applications and services.
“OpenStack has become a framework for computing that lets you plug in commercial and open source options for virtualisation, storage and networking, which is a key benefit for users. What that points to is that OpenStack operates as an integration engine that can take different types of hardware and software, and integrate them into a unified platform that users can operate applications and services on top of,” he said.
Bryce announced that the OpenStack Foundation, which oversees the activities of the OpenStack developer community, has been officially recognised as a tax-exempt non-profit business by the US government.
“From a practical perspective, this means we will have more resources to invest in the community over the long term,” he said.
Bryce also announced the launch of a new App Dev section on the OpenStack.org website with resources to help developers make better use of the OpenStack APIs, including a whitepaper on containers.
Containers are the hot technology of the moment, as they hold the promise of packaging applications and services for easy deployment in the cloud, with greater density and scalability than using virtual machines. Much of the effort in the OpenStack community is thus now focused on making containers work without being too restrictive or tying users into one container platform or another.
Docker has garnered much publicity for its container technology, but successfully bringing containers to OpenStack involves more than just supporting Docker, as Craig McLuckie, group product manager for Google’s Compute Engine platform, explained.
“There needs to be something to map containers to your OpenStack infrastructure, the compute, storage and network resources, so that applications inside the containers can access these,” he said.
Naturally, McLuckie held up the Kubernetes project that Google founded as a key part of the solution, with other pieces supplied by OpenStack’s Magnum and the Murano project started by OpenStack firm Mirantis.
“Magnum adds Kubernetes to OpenStack, while Mirantis’ Murano provides native Kubernetes package integration,” McLuckie explained, though he added that there is still much work to be done on properly integrating containers into OpenStack.
“We need to work together as a community to ensure that the core service model can span virtual machines and containers, and we need better integration with the Neutron (networking) module and a solution for containers on bare metal,” he said.
“Virtual machines still have a future as they are the only way to achieve the isolation some applications and services need, but for many people containers are the way forward for most workloads.”
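To make the Magnum piece concrete: Magnum exposes container clusters through a REST API, and creating a Kubernetes cluster boils down to posting a small JSON document. The sketch below, which assembles such a request body, follows the field names in Magnum's v1 API; the cluster name and template ID are hypothetical placeholders, and a real deployment would send this to an authenticated Magnum endpoint.

```python
import json

# Sketch of a request body for Magnum's cluster-creation call (POST /v1/clusters).
# Field names follow the Magnum v1 API; the name and template ID are placeholders.

def build_cluster_request(name, template_id, node_count=3, master_count=1):
    """Assemble the JSON body Magnum expects when creating a container cluster."""
    return {
        "name": name,
        "cluster_template_id": template_id,  # the template picks the COE, e.g. Kubernetes
        "node_count": node_count,            # worker nodes, backed by Nova instances
        "master_count": master_count,        # Kubernetes control-plane nodes
    }

body = build_cluster_request("demo-k8s", "11111111-2222-3333-4444-555555555555")
print(json.dumps(body, indent=2))
```

The point of the indirection through a cluster template is exactly the one McLuckie makes: the template maps the container engine onto the underlying Nova, Cinder and Neutron resources, so the application inside the containers never deals with them directly.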
Intel has teamed up with OpenStack distribution provider Mirantis to push adoption of the OpenStack cloud computing framework.
The deal, which includes a $100m investment in Mirantis from Intel Capital, will provide technical collaboration between the two companies and look to strengthen the open source cloud project by speeding up the introduction of more enterprise features as well as services and support for customers.
The funding will also bring on board Goldman Sachs as an investor for the first time, the firm said, alongside collaboration from the companies’ engineers in the community on OpenStack high availability, storage, network integration and support for big data.
“Intel is actually providing us with cash, so they’ve bought a co-development subscription from us. Then, in addition, we’ve strengthened our balance sheet by putting more equity financing dollars into the company. So overall the total funds are at $100m,” said Mirantis president and co-founder Alex Freedland.
“With Intel as our partner, we’ll show the world that open design, open development and open licensing is the future of cloud infrastructure software. Mirantis’ goal is to make OpenStack the best way to deliver cloud software, surpassing any proprietary solutions.”
Freedland added that there is nothing proprietary in the arrangement, and that all of the collaborative work flows directly into open source. No intellectual property is going to Intel.
“All this is community-driven, so everyone will be able to take advantage of it,” he added.
The move is part of the Cloud for All initiative announced by Intel in July.
Intel is becoming increasingly involved in OpenStack. The company said at the OpenStack Summit in May that it is making various contributions, including improving the security of containerised applications in the cloud using the VT-x extensions in Intel processors.
Other big companies are also backing the open source software. Google announced in July that it had joined the OpenStack Foundation as a corporate sponsor in a bid to promote open source and open cloud technologies.
Working closely with other members of the OpenStack community, Google said that the move will bring its expertise in containers and container management to OpenStack while sharing its work with innovative open source projects like Kubernetes.
HP has released its financial results for the third quarter, and they make for somewhat grim reading.
The company has seen drops in key parts of the business and an overall drop in GAAP net revenue of eight percent year on year to $25.3bn, compared with $27.6bn in 2014.
The company failed to meet its projected net earnings per share, which it had put at $0.50-$0.52, with an actual figure of $0.47.
The figures reflect a time of deep uncertainty at the company as it moves ever closer to its demerger into HP and Hewlett Packard Enterprise. The latter began filing registration documents in July to assert its existence as a separate entity, while the boards of both companies were announced two weeks ago.
Dell CEO Michael Dell slammed the move in an exclusive interview with The INQUIRER, saying he would never do the same to his company.
The big boss at HP remained upbeat, despite earnings falling short of expectations. “HP delivered results in the third quarter that reflect very strong performance in our Enterprise Group and substantial progress in turning around Enterprise Services,” said Meg Whitman, chairman, president and chief executive of HP.
“I am very pleased that we have continued to deliver the results we said we would, while remaining on track to execute one of the largest and most complex separations ever undertaken.”
To which we have to ask: “Which figures were you looking at, lady?”
Breaking down the figures by business unit, Personal Systems revenue was down 13 percent year on year, while notebook sales fell three percent and desktops 20 percent.
Printing was down nine percent, but with a 17.8 percent operating margin. HP has been looking at initiatives to create loyalty among print users such as ink subscriptions.
The Enterprise Group, soon to be spun off, was up two percent year on year, although Business Critical Systems revenue dropped by 21 percent, offset by networking revenue, which climbed 22 percent.
Enterprise Services revenue dropped 11 percent with a six percent margin, while software dropped six percent with a 20.6 percent margin. Software-as-a-service revenue dropped by four percent.
HP Financial Services was down six percent, with a two percent decrease in net portfolio assets and a two percent decrease in financing volume.
HP has proclaimed that it will buy 12 years of wind power from SunEdison and use it to run a new data centre in Texas.
The firm’s embracing of the wind market follows similar commitments from Facebook, which is planning to run its newest centre, the fifth so far, on wind power alone.
HP said that the 12-year purchase agreement will provide 112MW of wind power sourced from SunEdison and its nearby facilities.
The company said that 112MW could power some 40,000 homes, and will save more than 340,000 tons of carbon dioxide every year.
HP added that the deal puts the firm well on the way to meeting its green goals this year, five years earlier than the 2020 target previously stated.
The renewable energy purchase is a first for HP and will power the new 1.5 million square foot data centre in Texas.
“This agreement represents the latest step we are taking on HP’s journey to reduce our carbon footprint across our entire value chain, while creating a stronger, more resilient company and a sustainable world,” said Gabi Zedlmayer, vice president and chief progress officer for corporate affairs at HP.
“It’s an important milestone in driving HP Living Progress as we work to create a better future for everyone through our actions and innovations.”
SunEdison, which HP calls the “world’s largest renewable energy development company”, is predictably excited to be the provider chosen to put the wind up HP servers.
“Wind-generated electricity represents a good business opportunity for Texas and for HP,” said Paul Gaynor, executive vice president, Americas and EMEA, at SunEdison.
“By powering its data centres with renewable energy, HP is taking an important step toward a clean energy future while lowering operating costs.
“At the same time, HP’s commitment allows us to build this project which creates valuable local jobs and ensures Texan electricity customers get cost-effective energy.”
Oracle said weak sales of its traditional database software licenses were made worse by a strong US dollar, which lowered the value of foreign revenue.
Shares of Oracle, often seen as a barometer for the technology sector, fell 6 percent to $42.15 in extended trading after the company’s earnings report on Wednesday.
Shares of Microsoft and Salesforce.com, two of Oracle’s closest rivals, were close to unchanged.
Daniel Ives, an analyst at FBR Capital Markets, said that the announcement speaks to the headwinds Oracle is seeing in the field as its legacy database business slows. It also shows that, while the cloud business has seen pockets of strength, it is not doing as well as many had thought.
Oracle, like other established tech companies, is looking to move its business to the cloud-computing model, essentially providing services remotely via data centres rather than selling installed software.
The 38-year-old company has had some success with the cloud model, but is not moving fast enough to make up for declines in its traditional software sales.
Oracle, along with German rival SAP has been losing market share in customer relationship management software in recent years to Salesforce.com, which only offers cloud-based services.
Because of lower software sales and the strong dollar, Oracle’s net income fell to $2.76 billion, or 62 cents per share, in the fourth quarter ended May 31, from $3.65 billion, or 80 cents per share, a year earlier.
Revenue fell 5.4 percent to $10.71 billion. Revenue rose 3 percent on a constant currency basis. Analysts had expected revenue of $10.92 billion, on average.
Sales from Oracle’s cloud-computing software and platform service, an area keenly watched by investors, rose 29 percent to $416 million.
A senior manager at Red Hat has warned the community of the importance of ensuring that OpenStack users have sufficient, qualified support for their infrastructure.
Alessandro Perilli, general manager for cloud management strategy at Red Hat, made the point in a blog post this week entitled Beware scary OpenStack support.
“Enterprise-grade support for any open source project, and especially for one as complex as OpenStack, can be articulated through many dimensions. However, they are almost never part of the conversation until too late,” he wrote.
Perilli goes on to list six key dimensions that system administrators should be looking for: expertise in the underlying operating system; security response; certification and compliance; code indemnification; vertical consulting; and extended cloud management.
He warned that enterprises are in great danger if they don’t stick to well-established Linux distros with experienced knowledge bases.
“When your OpenStack vendor is using a Linux distribution that has been in the market for a very short period (i.e. one year), has no history of contribution to the Linux distribution of choice, and doesn’t even mention its Linux distribution of choice in its marketing materials, this spells scary enterprise support,” he said.
It seems like obvious advice, but Perilli pointed to several major organisations that have fallen foul of this, and the results can be devastating because of the numbers involved in rolling out such an infrastructure.
Other potential pitfalls in the list include vendors that “cannot back port and port a security fix to older and newer versions of OpenStack before it’s fixed in the trunk code”, “have no experience in the legal implications with open source licensing”, “only support their own hardware”, “have a consulting division that consists of four engineers across five continents”, “have a cloud management platform that cannot support side by side server virtualization, IaaS, and PaaS across private and public environments” and many more.
OpenStack is as vulnerable to problems as any other platform, and being open source means that anyone can offer contributions, and anyone can put themselves forward as a vendor, a consultant or a self-proclaimed expert.
A recent study found that Red Hat's commercial solution was among the offerings still able to undercut an OpenStack rollout on total cost. Meanwhile, the company's Fedora open source operating system has just reached version 22.
Perilli concluded by saying: “Any OpenStack provider claiming to offer enterprise-grade support must excel in every one of those aforementioned dimensions, not just one of them.”
In other words, it’s not enough to claim to be an OpenStack expert. You have to walk the walk as well as talk the talk.
The OpenStack framework is rapidly maturing into a business IT platform that is ready for enterprise-grade deployment, according to firms involved in the OpenStack community, including Intel, which announced a technology called Clear Containers to secure containerised apps.
The OpenStack Foundation lined up a succession of organisations and vendors at the first OpenStack Summit of 2015 that are working to improve the platform or are already successfully operating it.
Some are using it at massive scale. eBay disclosed that its infrastructure already contains over 300,000 processor cores managed by OpenStack.
The message from many of those using and helping to develop OpenStack is that the platform has come a long way since it started as a joint project between Nasa and Rackspace back in 2010, and has become stable and mature enough for production purposes in a wide variety of use cases.
However, there is still room for improvement, especially when it comes to areas like setting up and updating an OpenStack cloud, according to Imad Sousou, general manager of Intel’s Open Source Technology Centre.
“At Intel, we believe that software-defined infrastructure is the cornerstone of the modern data centre, and OpenStack is the cornerstone of software-defined infrastructure, but there is a lot more work to do on it and a lot of sceptics out there,” he said.
Sousou compared OpenStack with Linux, which has taken 20 years or so to mature to the point where organisations can buy something like Red Hat Enterprise Linux, which is easy to install and operate.
“We need to get to that level with OpenStack and software-defined infrastructure, and there is a lot of work going on in the community to get there,” he said.
Intel also detailed at the summit how the company is working to improve the security of containerised applications by using the VT-x extensions in its processors to enforce isolation between containers.
This is called Clear Containers, and is part of Intel’s Clear Linux, a lightweight operating system intended for data centre operations with technologies such as container platforms.
“Intel’s approach with Clear Containers offers enhanced protection using security rooted in hardware. By using virtualisation technology features [VT-x] embedded in the silicon, we can deliver the improved security and isolation advantages of virtualisation technology for a containerised application,” said Sousou.
In addition, Intel’s Clear Linux is able to launch a Clear Container in under 200ms, and able to run thousands of them on a single server node, according to Sousou.
Other firms discussing their involvement with OpenStack at the summit included Yahoo, which powers its online services with “hundreds of thousands” of servers managed by OpenStack.
US retail giant Walmart, meanwhile, disclosed that it has about 140,000 cores managed by OpenStack in the infrastructure used to operate its e-commerce platform.
“As production scenarios go, it doesn’t get much more serious than Walmart on Black Friday,” commented OpenStack Foundation executive director Jonathan Bryce.
The OpenStack Foundation has announced new interoperability testing requirements for OpenStack-branded products, and is claiming rapid adoption of the federated identity service introduced in the latest OpenStack release, which makes it easier to combine private and public cloud resources.
Foundation executive director Jonathan Bryce said at the first OpenStack Summit event of 2015 that the vision for the OpenStack project was to create a “global footprint of interoperable clouds” that would enable users to seamlessly mix and match resources from their own data centre with those of public cloud providers, delivering a so-called hybrid cloud model.
To this end, Bryce announced new interoperability testing requirements for products that are branded as ‘OpenStack Powered’, including public cloud and hosted private cloud services as well as OpenStack distributions.
“This is a big milestone and introduces common code in every distribution that brands itself as OpenStack, and common APIs that have been tested and validated,” he said.
In practice, this means that, along with an OpenStack Powered logo, products will carry a badge to show certification.
This currently applies only to some of the platform’s core modules, such as Nova (compute), Swift (object storage), Keystone (identity service) and the Glance image service.
But it is intended as a guarantee to users that a certified product contains a set of core services consistent with all other OpenStack products that are similarly certified.
Vendors already offering certified products include HP, IBM, Rackspace, Red Hat, Suse and Canonical, but the list is set to expand this year.
“During 2015, this will go across all products that are OpenStack. You will be able to know what you are getting in an OpenStack Powered product, and you will be able to count on those as your solid foundation for cloud,” Bryce said.
Meanwhile, the Kilo release of OpenStack, available since last month, added federated identity to the Keystone service as a fully integrated capability for the first time.
Despite being so new, OpenStack said that over 30 products and services in the OpenStack application catalogue support federated identity as of today, and that many OpenStack cloud providers have committed to supporting it by the end of this year.
Together, these two announcements are significant for OpenStack’s hybrid cloud proposition, as they will make it much easier to link a customer’s private cloud resources with those of a public cloud provider.
OpenStack Powered certification means that users can count on a consistent environment across the two, while Keystone provides a common authentication system that can integrate with directory services such as LDAP.
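In outline, authenticating against Keystone means posting a structured JSON document to the Identity v3 tokens endpoint. A minimal sketch of building that request body follows, assuming the simple password method; in a federated deployment this method would be replaced by a SAML or OpenID Connect exchange, and the user, domain and project names here are placeholders.

```python
import json

# Sketch of the request body for a Keystone v3 token (POST /v3/auth/tokens).
# The nesting follows the Identity v3 API; names below are placeholders.

def password_auth_body(username, password, user_domain="Default",
                       project=None, project_domain="Default"):
    """Build an Identity v3 password-authentication payload."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            }
        }
    }
    if project:  # scope the token to a project when one is given
        body["auth"]["scope"] = {
            "project": {"name": project, "domain": {"name": project_domain}}
        }
    return body

print(json.dumps(password_auth_body("demo", "secret", project="demo-project"), indent=2))
```

Because every certified cloud speaks this same API, a token workflow written against a private Keystone can be pointed at a public provider's endpoint, which is the hybrid-cloud consistency the certification is meant to guarantee.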
One company already taking advantage of this is high-tech post-production firm DigitalFilm Tree, which has been working with HP and hosted private cloud firm Bluebox to build a totally cloud-based production system for film and TV content.
The firm demonstrated at the summit how the system enables footage to be captured and uploaded to one cloud, then transferred to another cloud for processing.
Bryce explained that this is just one example of how OpenStack is driving new use cases and expanding what people can do across a variety of industries.
“Interoperability means you can share your cloud footprint. It shows the power of the ‘OpenStack planet’ we are trying to build,” he said.
451 Research has revealed that proprietary cloud offerings are currently more cost effective than OpenStack.
The Cloud Price Index showed that VMware, Red Hat and Microsoft all offer a better total cost of ownership (TCO) than OpenStack distributors.
The report blames the difference on a shortage of skilled OpenStack engineers, which makes them expensive to employ.
Commercial solutions run at around $0.10 per virtual machine hour, compared with $0.08 for OpenStack, but going commercial is cheaper when labour and other external factors are taken into account.
The report claimed that enterprises could hire an extra three percent of staff for a commercial cloud rollout and still save money.
“Finding an OpenStack engineer is a tough and expensive task that is impacting today’s cloud-buying decisions,” said Dr Owen Rogers, senior analyst at 451 Research.
“Commercial offerings, OpenStack distributions and managed services all have their strengths and weaknesses, but the important factors are features, enterprise readiness and the availability of specialists who understand how to keep a deployment operational.
“Buyers need to balance all of these aspects with a long-term strategic view, as well as TCO, to determine the best course of action for their needs.”
Enterprises need to consider whether they may end up locked into a proprietary feature which could then go up in price, or whether features may become decommissioned over time.
451 Research believes that this TCO gulf will narrow in time as OpenStack matures and the talent pool grows.
The research also suggests that OpenStack can already provide a TCO advantage over DIY solutions, the tipping point coming when 45 percent of manpower is saved by doing so. The company believes that the ‘golden ratio’ is 250 virtual machines per engineer.
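The arithmetic behind 451's conclusion can be sketched with a back-of-the-envelope comparison. The per-VM-hour rates and the 250-VM golden ratio come from the report as quoted above; the fleet size, engineer salary and the commercial staffing ratio are purely illustrative assumptions.

```python
# Back-of-the-envelope TCO comparison using the article's per-VM-hour figures.
# The salary and the commercial staffing ratio are illustrative assumptions;
# the 250-VM-per-engineer 'golden ratio' is the figure 451 Research quotes.

HOURS_PER_YEAR = 8760

def annual_tco(vms, rate_per_vm_hour, engineers, cost_per_engineer):
    """Total yearly cost: VM-hours plus staff."""
    return vms * HOURS_PER_YEAR * rate_per_vm_hour + engineers * cost_per_engineer

vms = 1000
salary = 120_000  # assumed fully loaded annual cost per engineer, USD

# Commercial distro: $0.10/VM-hour, easier to staff (assume 1 engineer per 500 VMs)
commercial = annual_tco(vms, 0.10, vms / 500, salary)

# OpenStack: $0.08/VM-hour, scarcer skills (451's ratio of 250 VMs per engineer)
openstack = annual_tco(vms, 0.08, vms / 250, salary)

print(f"commercial: ${commercial:,.0f}  openstack: ${openstack:,.0f}")
```

Under these assumptions the commercial option comes out cheaper despite the higher hourly rate, which is the report's point: the labour premium on scarce OpenStack skills outweighs the $0.02/VM-hour saving.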
OpenStack’s next major release, Kilo, has just arrived, and Ubuntu and HP are the first distributions to incorporate it.
Red Hat and Ubuntu are major contributors to the OpenStack code, which also underpins their own commercial products, along with HP as part of its Helion range.
As part of its sponsorship announcement, Citrix said that products including NetScaler and XenServer will be coming to OpenStack.
Citrix has been a contributor to OpenStack for some time, but this sponsorship announcement sees the company ramping up its involvement and integrating its core product lines.
Klaus Oestermann, senior vice president and general manager of delivery networks at Citrix, said: “We’re pleased to formally sponsor the OpenStack Foundation to help drive cloud interoperability standards.
“Citrix products like NetScaler, through the recently announced NetScaler Control Centre, and XenServer are already integrated with OpenStack.
“Our move to support the OpenStack community reflects the great customer and partner demand for Citrix to bring the value of our cloud and networking infrastructure products to customers running OpenStack.”
Citrix already supports the Apache Software Foundation and the Linux Foundation, and has pledged to continue investing in Apache CloudStack and CloudPlatform in addition to its work with OpenStack.
Jonathan Bryce, executive director of the OpenStack Foundation, added: “Diversity and choice are two powerful drivers behind the success of OpenStack and the growing list of companies that have chosen OpenStack as their infrastructure platform.
“We’re glad to see Citrix become a corporate sponsor, and we look forward to the contributions they can bring to the community as it continues driving cloud infrastructure innovation and software maturity.”
Canonical announced on Tuesday that the 15.04 edition of Ubuntu OpenStack will be the first commercially available product to be based on OpenStack Kilo, which is due for release at the end of the month.
Early adopters will get the release candidate, and the full version will follow days after.
Citrix is joining the alliance at an interesting time. Earlier this year, it was revealed that HP has become the largest single contributor to the current OpenStack version, Juno, overtaking Red Hat.
A number of alliances are forming within the OpenStack community as companies try to gain the upper hand. HP has buddied up with telecoms companies including AT&T and BT, while Juniper and Mirantis have joined forces, though the latter has confirmed that this is not a snub to VMware.
Citrix coming aboard with its existing ties to Apache and Linux seems to represent another example of the cross-pollination of the OpenStack movement across the industry, with companies clamoring to back it either as a first or second line of opportunity.
Red Hat has been telling everyone its plans to integrate the latest Linux 4.0 kernel into its products.
In a statement, a spokesman told us, “Red Hat’s upstream community projects will begin working with 4.0 almost immediately; in fact, Fedora 22 Alpha was based on the RC1 version of the 4.0 kernel.
“From a productization perspective, we will keep an eye on these integration efforts for possible inclusion into Red Hat’s enterprise portfolio.
“As with all of our enterprise-grade solutions, we provide stable, secure and hardened features, including the Linux kernel, to our customers – once we are certain that the next iteration of the Linux kernel, be it 4.0 or later, has the features and maturity that our customer base requires, we will begin packaging it into our enterprise portfolio with the intention of supporting it for 10 years, as we do with all of our products.”
Meanwhile, Canonical Head Honcho Mark Shuttleworth has confirmed that Linux Kernel 4.0 should be making its debut in Ubuntu products before the end of the year.
In an earlier note to The INQUIRER, Shuttleworth confirmed that the newly released kernel’s integration was “likely to be in this October release.”
The news follows the release of version 4.0 of the Linux kernel, which arrived, as T S Eliot might put it, “not with a bang but a whimper”.
Writing on the Linux Kernel Mailing List on Sunday afternoon, Linux overlord Linus Torvalds explained that the new version was being released according to schedule, rather than because of any dramatic improvements, and because of a lack of any specific reason not to.
“Linux 4.0 was a pretty small release in linux-next and in final size, although obviously ‘small’ is relative. It’s still over 10,000 non-merge commits. But we’ve definitely had bigger releases (and judging by linux-next v4.1 is going to be one of the bigger ones),” he said.
“Feature-wise, 4.0 doesn’t have all that much special. Much has been made of the new kernel patching infrastructure, but realistically that wasn’t the only reason for the version number change. We’ve had much bigger changes in other versions. So this is very much a ‘solid code progress’ release.”
Come to think of it, it is very unlikely that T S Eliot would ever have written about Linux kernels, but that’s not the point.
Torvalds, meanwhile, explained that he is happier releasing to a schedule than because of any specific feature-related reason, although he did note that the kernel’s Git repository has passed the four million object mark, and Linux 3.0 was released after the two million mark, so there’s a nice symmetry there.
In fact, back in 2011 the version numbering of the Linux kernel was a matter of some debate, and Torvalds’ lacklustre announcement seems to be pre-empting more of the same.
In a subsequent post Torvalds jokes, “the strongest argument for some people advocating 4.0 seems to have been a wish to see 4.1.15 – because ‘that was the version of Linux Skynet used for the T-800 Terminator.’”
Canonical and Ericsson have announced their arrival in the cloud telecoms market after signing a three-year collaboration to develop network functions virtualization (NFV) products for software-defined communications networks.
The deal will see Ericsson deploying the Ubuntu Server operating system as the host for all its cloud offerings.
John Zannos, VP of cloud alliances and channels at Canonical, told The INQUIRER: “It’s actually a very exciting time to be alive, with the pace of change in the marketplace. As we move toward software-defined solutions more and more, we’re going to see the accelerating pace of change more than ever.”
By working together, the companies hope to drive adoption of NFV products and accelerate research.
The news comes just a day after Oracle and Intel announced a similar deal based on an Oracle hypervisor to control expansion and contraction of communication network nodes at an intelligent level.
As with that announcement, the Canonical-Ericsson arrangement is based on the interoperability provided by OpenStack, meaning that the alignment between the two projects is set to be much closer than one might expect.
“What is most exciting for us is not just the chance to work with Ericsson, which already carries nearly 40 percent of the world’s mobile traffic, but the opportunities that working together brings for us to take these concepts to the next level,” said Zannos.
Ubuntu is used in 80 percent of OpenStack cloud deployments worldwide, and using Ubuntu Server means that the partnership should be able to bring the newest ideas in open platform NFV to market.
“Our ability to offer scale-out solutions means that for the first time we can help meet the massive demand on telecoms in the future,” said Zannos.
“I don’t want to speculate on ‘infinite scalability’ because infinite is a pretty big number, but we’re certainly able to create solutions without the restraints of traditional hardware.”
The rollout of open platform NFV acts as a natural next step after the arrival of cloud communication. Virtualizing the workload of global communications, and reducing the natural lag of hardware controllers, allows providers to offer cheaper running costs, lower energy use and greater flexibility to grow and contract the network according to customer need.
Zannos added: “Organizations are struggling to keep pace with data, complexity, cost and compliance demands, so this partnership will help customers overcome many of these challenges.”
The Ericsson name disappeared from the consumer market after Sony acquired the joint Sony-Ericsson venture in 2012, but the Swedish company’s reach remains vast. A venture into virtual telecoms, alongside the biggest single Linux distribution, is bound to disrupt the market.
Ericsson recently became the latest company to join the alliance of Canonical’s Snappy Ubuntu Core for the Internet of Things.
Zannos also confirmed that there will be room for cross-fertilization between the two alliances in the coming months and years, particularly with the opportunities for the silent, seamless firmware upgrades that underpin the technology.
Oracle and Intel have teamed up for the first demonstration of carrier-grade network function virtualization (NFV), which will allow communication service providers to use a virtualized, software-defined model without degradation of service or reliability.
The Oracle-led project uses the Intel Open Network Platform (ONP) to create a robust service over NFV, using intelligent direction of software to create viable software-defined networking that replaces the clunky equipment still prevalent in even the most modern networks.
Barry Hill, Oracle’s global head of NFV, told The INQUIRER: “It gets us over one of those really big hurdles that the industry is desperately trying to overcome: ‘Why the heck have we been using this very tightly coupled hardware and software in the past if you can run the same thing on standard, generic, everyday hardware?’. The answer is, we’re not sure you can.
“What you’ve got to do is be smart about applying the right type and the right sort of capacity, which is different for each function in the chain that makes up a service.
“That’s about being intelligent with what you do, instead of making some broad statement about generic vanilla infrastructures plugged together. That’s just not going to work.”
Oracle’s answer is to use its Communications Network Service Orchestration Solution to control the OpenStack system and shrink and grow networks according to customer needs.
Use cases could be scaling out a carrier network for a rock festival, or transferring network priority to a disaster recovery site.
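At its core, that kind of elastic scaling comes down to sizing a pool of virtual network functions against forecast demand. A minimal sketch of the arithmetic (the function name, capacities and headroom figure are illustrative assumptions, not Oracle’s actual orchestrator):

```python
# Hypothetical sketch of the scale-out arithmetic an NFV orchestrator
# performs; names and capacities are illustrative, not a real API.
import math

def instances_needed(expected_sessions: int, sessions_per_vnf: int,
                     headroom: float = 0.2) -> int:
    """Number of VNF instances to run, with spare headroom for spikes."""
    target = expected_sessions * (1 + headroom)
    return math.ceil(target / sessions_per_vnf)

# A rock festival triples normal load from 50,000 to 150,000 sessions,
# so the orchestrator grows the pool from 6 to 18 instances:
baseline = instances_needed(50_000, 10_000)   # -> 6
festival = instances_needed(150_000, 10_000)  # -> 18
```

The point is that the decision is pure software: the same off-the-shelf hardware serves both loads, and the pool shrinks back when the crowd goes home.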
“Once you understand the extent of what we’ve actually done here, you start to realize just how big an announcement this is,” said Hill.
“On the fly, you’re suddenly able to make these custom network requirements instantly, just using off-the-shelf technology.”
The demonstration configuration, designed specifically for networking workloads, optimises the performance of an Intel Xeon E5-2600 v3 processor, and shows for the first time a software-defined solution with performance comparable to the hardware-defined systems currently in use.
In other words, it can orchestrate services from the management and orchestration level right down to a single core of a single processor, and then hyperscale them using resource pools to mimic the specialized characteristics of a network appliance, such as large memory pages.
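In OpenStack terms, this sort of platform awareness surfaces as flavor “extra specs” that pin a guest to dedicated CPU cores and back its memory with huge pages. A minimal sketch, assuming the standard Nova `hw:*` extra-spec keys (the flavor name is hypothetical):

```python
# Sketch of the Nova flavor extra specs behind platform-aware NFV
# placement: dedicated (pinned) CPU cores and large memory pages.
# The hw:* keys are standard Nova extra specs; the flavor name below
# is a made-up example.
nfv_flavor_extra_specs = {
    "hw:cpu_policy": "dedicated",   # pin vCPUs to physical cores
    "hw:mem_page_size": "large",    # back guest RAM with huge pages
    "hw:numa_nodes": "1",           # keep the guest on one NUMA node
}

# Applied with the OpenStack CLI, this would look something like:
#   openstack flavor set nfv.small \
#       --property hw:cpu_policy=dedicated \
#       --property hw:mem_page_size=large \
#       --property hw:numa_nodes=1
```

Guests launched from such a flavor get the appliance-like characteristics the demonstration mimics, while still running on generic x86 hardware.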
“It’s kind of like the effect that mobile had on fixed line networks back in the mid-nineties where the whole industry was disrupted by who was providing the technology, and what they were providing,” said Hill.
“Suddenly you went from 15-year business plans to five-year business plans. The impact of virtualization will have the same level of seismic change on the industry.”
Today’s announcement is fundamentally a proof-of-concept, but the technology that powers this kind of next-generation network is already finding its way into real networks.
Hill explained that carrier demand had led to the innovation. “The telecoms industry had a massive infrastructure that works at a very slow pace, at least in the past,” he said.
“However, this whole virtualization push has really been about the carriers, not the vendors, getting together and saying: ‘We need a different model’. So it’s actually quite advanced already.”
NFV appears to be the next gold rush area for enterprises, and other consortia are expected to make announcements about their own solutions within days.
The Oracle/Intel system is based on OpenStack, and Oracle is confident that it will be highly compatible with other systems.
The ‘Oracle Communications Network Service Orchestration Solution with Enhanced Platform Awareness using the Intel Open Network Platform’ – or OCNSOSWEPAUTIONP as we like to think of it – is currently on display at Oracle’s Industry Connect event in Washington DC.
The INQUIRER wonders whether there is any way the marketing department can come up with something a bit more catchy than OCNSOSWEPAUTIONP before it goes on open sale.
HP has announced its first off-the-shelf configured private cloud based on OpenStack and Cloud Foundry.
HP Helion Rack continues the Helion naming convention for HP’s cloud offerings, and will, it is hoped, help enterprise IT departments speed up cloud deployment by offering a solid template system and removing the months of design and build.
Helion Rack is a “complete” private cloud with integrated infrastructure-as-a-service and platform-as-a-service capabilities that mean it should be a breeze to get it working with cloud-dwelling apps.
“Enterprise customers are asking for private clouds that meet their security, reliability and performance requirements, while also providing the openness, flexibility and fast time-to-value they require,” said Bill Hilf, senior vice president of product management for HP Helion.
“HP Helion Rack offers an enterprise-class private cloud solution with integrated application lifecycle management, giving organisations the simplified cloud experience they want, with the control and performance they need.”
HP cites the key features of its product as rapid deployment, simplified management, easy scaling, workload flexibility, faster native-app development and, of course, the open architecture of OpenStack and Cloud Foundry, providing a vast support network for implementation, use cases and customisation.
The product is built on HP ProLiant DL servers, assembled by HP and configured with HP Helion OpenStack and the HP Helion Development Platform. HP and its partners can then work alongside customers to find the best way to exploit the product, knowing that it is up and running from day one.
HP Helion Rack will be available in April with prices varying by configuration. Finance is available for larger configurations.
Suse launched its own OpenStack Cloud 5 with Sahara data processing earlier this month, just one of many OpenStack implementations designed to roll the cloud revolution out quickly to enterprises, but HP is pioneering the offer of a complete, pre-configured, end-to-end package.
Mirantis and Juniper have signed an engineering partnership that they believe will lead to a reliable, scalable software-defined networking solution.
Mirantis OpenStack will now inter-operate with Juniper Contrail Networking, as well as OpenContrail, an open source software-defined networking system.
The two companies have published a reference architecture for deploying and managing Juniper Contrail Networking with Mirantis OpenStack to simplify deployment and reduce the need for third-party involvement.
Based on OpenStack Juno, Mirantis OpenStack 6.0 will be enhanced in the second quarter by a Fuel plugin that will make it even easier to deploy large-scale clouds in-house.
However, Mirantis has emphasized that the arrival of Juniper to the fold is not a snub to the recently constructed integration with VMware.
Nick Chase of Mirantis explained, “…with this Juniper integration, Mirantis will support BOTH VMware vCenter Server and VMware NSX AND Juniper Networks Contrail Networking. That means that even if they’ve got VMware in their environment, they can choose to use NSX or Contrail for their networking components.
“Of course, all of that begs the question, when should you use Juniper, and when should you use VMware? Like all great engineering questions, the answer is ‘it depends’. How you choose is going to be heavily influenced by your individual situation, and what you’re trying to achieve.”
Juniper outlined its goals for the tie-up as:
- Reduce cost by enabling service providers and IT administrators to easily embrace SDN and OpenStack technologies in their environments
- Remove the complexity of integrating networking technologies in OpenStack virtual data centres and clouds
- Increase the effectiveness of their operations with fully integrated management for the OpenStack and SDN environments through Fuel and the Juniper Networks Contrail SDN Controller
The company is keen to emphasise that this is not meant to be a middle finger at VMware, but rather a demonstration of the freedom of choice offered by open source software. However, it serves as another demonstration of how even the FOSS market is growing increasingly proprietary and competitive.