Hewlett Packard Enterprise Goes Synergy

December 2, 2015 by Michael  
Filed under Computing

Hewlett Packard Enterprise has begun making a series of announcements to coincide with its first major showcase since splitting from HP Inc into an enterprise company with the “mindset of a startup”.

The first is a new product, HPE Synergy, which the company bills as a “new class of system to power the next era in hybrid infrastructure”.

The converged platform allows organisations to run a hybrid infrastructure by taking advantage of ‘fluid resource pools’, software-defined intelligence and a single API, making it easy to strike a continuous balance between on-premise and cloud computing.

Fluid resource pools combine compute, storage and fabric networking that can be composed on a case-by-case, need-by-need basis, booting up ready-to-deploy physical, virtual and containerised workloads as it does so.

The software-defined intelligence is able to self-discover and self-assemble the exact configuration and infrastructure needed for repeatable frictionless updates, while the unified API offers 100 percent infrastructure programmability, a bare-metal infrastructure-as-a-service interface and a single line of code to abstract every element of the infrastructure.
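The “infrastructure as code” idea behind the unified API can be illustrated with a short sketch. The field names and the `compose_node` helper below are hypothetical illustrations, not HPE's published Synergy or OneView schema, but the pattern of describing compute, storage and fabric in a single declarative document is the point.

```python
import json

# Hypothetical sketch of "infrastructure as code" against a composable
# infrastructure API. Field names are illustrative assumptions, not
# HPE's published Synergy/OneView schema.
def compose_node(name, cpu_cores, memory_gb, storage_gb, network="prod-fabric"):
    """Describe an entire node (compute, storage, fabric) in one document."""
    return {
        "type": "server-profile",
        "name": name,
        "compute": {"cores": cpu_cores, "memoryGb": memory_gb},
        "storage": [{"sizeGb": storage_gb, "raid": "RAID1"}],
        "connections": [{"network": network, "bandwidthGb": 10}],
    }

profile = compose_node("web-tier-01", cpu_cores=16, memory_gb=64, storage_gb=500)
body = json.dumps(profile)  # this single payload would be POSTed to the appliance
```

The appeal is that one declarative document, submitted through one API, stands in for what would otherwise be separate server, SAN and network provisioning steps.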

The HPE OneView UI offers a single interface for all types of storage within the infrastructure at a glance.

“Market data clearly shows that a hybrid combination of traditional IT and private clouds will dominate the market over the next five years,” said Antonio Neri, executive vice president and general manager of the Enterprise Group at HPE.

“Organisations are looking to capitalise on the speed and agility of the cloud but want the reliability and security of running business-critical applications in their own data centres. With HPE Synergy, IT can deliver infrastructure as code and give businesses a cloud experience in their data centre.”

HPE Synergy is designed to support numerous existing systems from big names including Arista, CapGemini, Chef, Docker, Microsoft, Nvidia and VMware.

It will be available to customers directly and via channel partners starting in the second quarter of 2016. Pricing will be announced at launch.



HP Finally Splits

November 4, 2015 by Michael  
Filed under Computing

HP, the maker of expensive printer ink, formally cut itself in two on Monday in a move to turn around its fortunes.

Now there will be two companies, HP Inc and Hewlett Packard Enterprise (HPE). Somewhat appropriately, HP Inc will be allowed to keep milking the ink business, but it will also have to sell the less profitable PCs and printers. The smart money is on HPE, which gets the company’s services and enterprise server hardware.

The plan was similar to one that ousted CEO Léo Apotheker came up with in 2011. Apotheker, however, planned to sell off PCs and printers to raise funds for acquisitions in such cool areas as Autonomy’s software. He never got around to selling off the PCs, but he did buy Autonomy.

Both companies have similar turnovers of around $57bn, and the bundling of the profitable printer division with the struggling PC business means that HP Inc will not be dead in the water before it starts.

The operational split between the two companies on 1 August also went smoothly, so today’s announcement is really just a formality.

Some think there is more fine-tuning to come, with a few more sell-offs. Last week, HP flogged its security business, TippingPoint, to Trend Micro and then announced its decision to exit the public cloud market in favour of partnering with Amazon Web Services and Microsoft.

Still, it does mean that the restructuring proceeded smoothly and on time. What will be more interesting is whether the two-headed monster can see off competition better than a bigger beast with only one head and a bit of a limp.


HP Drops Helion

October 27, 2015 by Michael  
Filed under Computing

HP has made a dramatic U-turn on its public cloud offering, just a week before the company splits in two.

The Helion Public Cloud will be “sunsetted” in January 2016 after failing to keep up with rivals such as Amazon Web Services (AWS).

But Helion product management SVP Bill Hilf explained in a blog post: “As we have before, we will help our customers design, build and run the best cloud environments suited to their needs, based on their workloads and their business and industry requirements.

“To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments. To enable this flexibility, we are helping customers build cloud-portable applications based on HP Helion OpenStack and the HP Helion Development Platform.”

In other words, HP will now find partners to deliver public cloud, within the Helion and Helion OpenStack ecosystems, but not try to sell its own product which has been, for want of a better word, bobbins.

The decision is a direct contradiction of the stand taken in April, when Hilf wrote: “In the past week, a quote of mine in the media was interpreted as HP is exiting the public cloud, which is not the case. Our portfolio strategy to deliver on the vision of hybrid IT continues strong.”

The statement was a retort to a quote in The New York Times coinciding with the first anniversary of the Helion brand, in which Hilf is quoted as saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head.”

Hilf claimed that The New York Times quote was taken out of context.

HP is a leading member of the Cloud28+ initiative, which brings together a common standard for cloud service providers, and the implication is that it is from this pool that HP Enterprise (as it will then be) will favour its future partnerships. Which probably means Amazon.



Ubuntu Moves To Microsoft’s Azure

September 30, 2015 by Michael  
Filed under Computing

Ubuntu is to become the basis for the first officially supported Linux-based application for the Microsoft Azure cloud, as Microsoft continues to expand its ties to the open source community.

Canonical and Microsoft confirmed in a joint announcement that HDInsight, the Hadoop-based big data service offering, will run on Ubuntu and the Hortonworks Data Platform.

T K Ranga Rengarajan, corporate vice president of data platform, cloud and enterprise at Microsoft, said: “The general availability of Azure HDInsight on Ubuntu Linux, which includes a service level agreement guarantee of 99.9 percent uptime and full technical support for the entire stack, offers the choice of running Hadoop workloads on the Hortonworks Data Platform in Azure HDInsight using Ubuntu or Windows.

“There’s also a growing ecosystem of ISVs delivering tools to create big data solutions on the Azure data platform with HDInsight.”

The news is an official announcement of a service first made available as a preview earlier in the year.

Since that time, the unlikely combo says it has seen growing adoption of HDInsight for Ubuntu as a straightforward way to move Hadoop from on-premise to the cloud.

Canonical explained that both companies have a common goal of hybrid cloud computing, including large-scale deployments spanning public and private infrastructures. It also pointed to the fact that there are more big data solutions on Ubuntu than on any other platform.

HDInsight is capable of running a wide variety of open source analytics engines, including Hive, Spark, HBase and Storm.

Microsoft made the announcement as part of recent improvements to its Azure Data Lake service. These include the Azure Data Lake Store, which can provide a single repository for customers to easily capture data of any size, type and speed without forcing changes to their application as data scales, Azure Data Analytics and a new service built on Apache Yarn that dynamically scales the customer environment based on need.

Microsoft revealed recently that the company uses Azure Cloud Switch, a network operating system built on Linux.


OpenStack Appears To Be Going Non-Profit

September 1, 2015 by Michael  
Filed under Computing

The OpenStack Community is turning its attention to support for containers and improving the platform’s enterprise-worthiness, as the OpenStack Foundation celebrated gaining non-profit status from the US government, a move that will free up extra resources for development, the organisation said.

Foundation executive director Jonathan Bryce said at the OpenStack Silicon Valley conference at California’s Computer History Museum that OpenStack has developed over the past five years into a general-purpose “integration engine” for IT departments to build infrastructure that allows them to operate a diverse array of applications and services.

“OpenStack has become a framework for computing that lets you plug in commercial and open source options for virtualisation, storage and networking, which is a key benefit for users. What that points to is that OpenStack operates as an integration engine that can take different types of hardware and software, and integrate them into a unified platform that users can operate applications and services on top of,” he said.

Bryce announced that the OpenStack Foundation, which oversees the activities of the OpenStack developer community, has been officially recognised as a tax-exempt non-profit business by the US government.

“From a practical perspective, this means we will have more resources to invest in the community over the long term,” he said.

Bryce also announced the launch of a new App Dev section on the website with resources to help developers make better use of the OpenStack APIs, including a whitepaper on containers.

Containers are the hot technology of the moment, as they hold the promise of packaging applications and services for easy deployment in the cloud, with greater density and scalability than using virtual machines. Much of the effort in the OpenStack community is thus now focused on making containers work without being too restrictive or tying users into one container platform or another.

Docker has garnered much publicity for its container technology, but successfully bringing containers to OpenStack involves more than just supporting Docker, as Craig McLuckie, group product manager for Google’s Compute Engine platform, explained.

“There needs to be something to map containers to your OpenStack infrastructure, the compute, storage and network resources, so that applications inside the containers can access these,” he said.

Naturally, McLuckie held up the Kubernetes project that Google founded as a key part of the solution, with other pieces supplied by OpenStack’s Magnum and the Murano project started by OpenStack firm Mirantis.

“Magnum adds Kubernetes to OpenStack, while Mirantis’ Murano provides native Kubernetes package integration,” McLuckie explained, adding that there is still much work to be done on properly integrating containers into OpenStack.

“We need to work together as a community to ensure that the core service model can span virtual machines and containers, and we need better integration with the Neutron (networking) module and a solution for containers on bare metal,” he said.

“Virtual machines still have a future as they are the only way to achieve the isolation some applications and services need, but for many people containers are the way forward for most workloads.”


Intel To Invest Heavily In Mirantis For OpenStack

August 26, 2015 by Michael  
Filed under Computing

Intel has teamed up with OpenStack distribution provider Mirantis to push adoption of the OpenStack cloud computing framework.

The deal, which includes a $100m investment in Mirantis from Intel Capital, will provide technical collaboration between the two companies and look to strengthen the open source cloud project by speeding up the introduction of more enterprise features as well as services and support for customers.

The funding will also bring on board Goldman Sachs as an investor for the first time, the firm said, alongside collaboration from the companies’ engineers in the community on OpenStack high availability, storage, network integration and support for big data.

“Intel is actually providing us with cash, so they’ve bought a co-development subscription from us. Then, in addition, we’ve strengthened our balance sheet by putting more equity financing dollars into the company. So overall the total funds are at $100m,” said Mirantis president and co-founder Alex Freedland.

“With Intel as our partner, we’ll show the world that open design, open development and open licensing is the future of cloud infrastructure software. Mirantis’ goal is to make OpenStack the best way to deliver cloud software, surpassing any proprietary solutions.”

Freedland added that there is nothing proprietary in the arrangement and that the collaboration is flowing directly into open source. No intellectual property is going to Intel.

“All this is community-driven, so everyone will be able to take advantage of it,” he added.

The move is part of the Cloud for All initiative announced by Intel in July.

Intel is becoming increasingly involved in OpenStack. The company said at the OpenStack Summit in May that it is making various contributions, including improving the security of containerised applications in the cloud using the VT-x extensions in Intel processors.

Other big companies are also backing the open source software. Google announced in July that it had joined the OpenStack Foundation as a corporate sponsor in a bid to promote open source and open cloud technologies.

Working closely with other members of the OpenStack community, Google said that the move will bring its expertise in containers and container management to OpenStack while sharing its work with innovative open source projects like Kubernetes.


Is HP’s Forthcoming Split A Good Idea?

August 25, 2015 by Michael  
Filed under Computing

HP has released its financial results for the third quarter, and they make for somewhat grim reading.

The company has seen drops in key parts of the business and an overall drop in GAAP net revenue of eight percent year on year to $25.3bn, compared with $27.6bn in 2014.

The company failed to meet its projected net earnings per share, which it had put at $0.50-$0.52, with an actual figure of $0.47.

The figures reflect a time of deep uncertainty at the company as it moves ever closer to its demerger into HP and Hewlett Packard Enterprise. The latter began filing registration documents in July to assert its existence as a separate entity, while the boards of both companies were announced two weeks ago.

Dell CEO Michael Dell slammed the move in an exclusive interview with The INQUIRER, saying he would never do the same to his company.

The big boss at HP remained upbeat despite the shortfall against expectations. “HP delivered results in the third quarter that reflect very strong performance in our Enterprise Group and substantial progress in turning around Enterprise Services,” said Meg Whitman, chairman, president and chief executive of HP.

“I am very pleased that we have continued to deliver the results we said we would, while remaining on track to execute one of the largest and most complex separations ever undertaken.”

To which we have to ask: “Which figures were you looking at, lady?”

Breaking down the figures by business unit, Personal Systems revenue was down 13 percent year on year, while notebook sales fell three percent and desktops 20 percent.

Printing was down nine percent, but with a 17.8 percent operating margin. HP has been looking at initiatives to create loyalty among print users such as ink subscriptions.

The Enterprise Group, soon to be spun off, was up two percent year on year, although Business Critical Systems revenue dropped by 21 percent, offset by networking revenue, which climbed 22 percent.

Enterprise Services revenue dropped 11 percent with a six percent margin, while software dropped six percent with a 20.6 percent margin. Software-as-a-service revenue dropped by four percent.

HP Financial Services was down six percent, with a two percent decrease in net portfolio assets and a two percent decrease in financing volume.



HP To Use Wind Power

July 23, 2015 by Michael  
Filed under Computing

HP has proclaimed that it will buy 12 years of wind power from SunEdison and use it to run a new data centre in Texas.

The firm’s embracing of the wind market follows similar commitments from Facebook, which is planning to run its newest centre, the fifth so far, on wind power alone.

HP said that the 12-year purchase agreement will provide 112MW of wind power sourced from SunEdison and its nearby facilities.

The company said that 112MW could power some 40,000 homes, and will save more than 340,000 tons of carbon dioxide every year.
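Those figures are easy to sanity-check with a back-of-envelope calculation; the inputs below come straight from the article.

```python
# Back-of-envelope check of HP's wind deal figures as reported:
# 112MW of capacity serving the equivalent of 40,000 homes.
capacity_mw = 112
homes = 40_000

kw_per_home = capacity_mw * 1_000 / homes  # kilowatts of capacity per home
# Works out to 2.8kW per home, a plausible per-household capacity figure.
```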

HP added that the deal puts the firm well on the way to meeting its green goals this year, five years earlier than the 2020 target previously stated.

The renewable energy purchase is a first for HP and will power the new 1.5 million square foot data centre in Texas.

“This agreement represents the latest step we are taking on HP’s journey to reduce our carbon footprint across our entire value chain, while creating a stronger, more resilient company and a sustainable world,” said Gabi Zedlmayer, vice president and chief progress officer for corporate affairs at HP.

“It’s an important milestone in driving HP Living Progress as we work to create a better future for everyone through our actions and innovations.”

SunEdison, which HP calls the “world’s largest renewable energy development company”, is predictably excited to be the provider chosen to put the wind up HP servers.

“Wind-generated electricity represents a good business opportunity for Texas and for HP,” said Paul Gaynor, executive vice president, Americas and EMEA, at SunEdison.

“By powering its data centres with renewable energy, HP is taking an important step toward a clean energy future while lowering operating costs.

“At the same time, HP’s commitment allows us to build this project which creates valuable local jobs and ensures Texan electricity customers get cost-effective energy.”


Oracle Appears To Be Sliding

June 22, 2015 by Michael  
Filed under Computing

Oracle said that weak sales of its traditional database software licences were made worse by a strong US dollar, which lowered the value of foreign revenue.

Shares of Oracle, often seen as a barometer for the technology sector, fell 6 percent to $42.15 in extended trading after the company’s earnings report on Wednesday.

Shares of Microsoft and Salesforce, two of Oracle’s closest rivals, were close to unchanged.

Daniel Ives, an analyst at FBR Capital Markets, said that the announcement speaks to the headwinds Oracle is facing in the field as its legacy database business sees slowing growth.

It also shows that, while the cloud business has seen pockets of strength, it is not doing as well as many thought.

Oracle, like other established tech companies, is looking to move its business to the cloud-computing model, essentially providing services remotely via data centres rather than selling installed software.

The 38-year-old company has had some success with the cloud model, but is not moving fast enough to make up for declines in its traditional software sales.

Oracle, along with German rival SAP, has been losing market share in customer relationship management software in recent years to Salesforce, which only offers cloud-based services.

Because of lower software sales and the strong dollar, Oracle’s net income fell to $2.76 billion, or 62 cents per share, in the fourth quarter ended May 31, from $3.65 billion, or 80 cents per share, a year earlier.

Revenue fell 5.4 percent to $10.71 billion. Revenue rose 3 percent on a constant currency basis. Analysts had expected revenue of $10.92 billion, on average.
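Those reported numbers let you back out a couple of implied figures. A quick check, with all inputs as reported above:

```python
# Implied figures from Oracle's reported Q4 numbers:
# revenue fell 5.4% to $10.71bn; net income $2.76bn, or 62 cents a share.
revenue_now_bn = 10.71
decline = 0.054
revenue_prior_bn = revenue_now_bn / (1 - decline)  # ≈ $11.32bn a year earlier

implied_shares_bn = 2.76 / 0.62  # ≈ 4.45bn shares outstanding
```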

Sales from Oracle’s cloud-computing software and platform service, an area keenly watched by investors, rose 29 percent to $416 million.


Red Hat Warns About OpenStack Support

June 12, 2015 by Michael  
Filed under Computing

A SENIOR MANAGER at Red Hat has warned the community of the importance of ensuring that OpenStack users have sufficient, qualified support for their infrastructure.

Alessandro Perilli, general manager for cloud management strategy at Red Hat, made the point in a blog post this week entitled Beware scary OpenStack support.

“Enterprise-grade support for any open source project, and especially for one as complex as OpenStack, can be articulated through many dimensions. However, they are almost never part of the conversation until too late,” he wrote.

Perilli goes on to list six key dimensions that system administrators should be looking for: expertise in the underlying operating system; security response; certification and compliance; code indemnification; vertical consulting; and extended cloud management.

He warned that enterprises are in great danger if they don’t stick to well-established Linux distros with experienced knowledge bases.

“When your OpenStack vendor is using a Linux distribution that has been in the market for a very short period (i.e. one year), has no history of contribution to the Linux distribution of choice, and doesn’t even mention its Linux distribution of choice in its marketing materials, this spells scary enterprise support,” he said.

It seems like obvious advice, but Perilli pointed to several major organisations that have fallen foul of this, and the results can be devastating because of the numbers involved in rolling out such an infrastructure.

Other potential pitfalls in the list include vendors that “cannot back port and port a security fix to older and newer versions of OpenStack before it’s fixed in the trunk code”, “have no experience in the legal implications with open source licensing”, “only support their own hardware”, “have a consulting division that consists of four engineers across five continents”, “have a cloud management platform that cannot support side by side server virtualization, IaaS, and PaaS across private and public environments” and many more.

OpenStack is as vulnerable to problems as any other platform, but being open source means that anyone can offer contributions, and anyone can offer themselves as a vendor, a consultant or a self-proclaimed expert.

A recent study found that a proprietary Red Hat solution was among the offerings still able to undercut an OpenStack rollout. Meanwhile, its Fedora open source operating system has just reached version 22.

Perilli concluded by saying: “Any OpenStack provider claiming to offer enterprise-grade support must excel in every one of those aforementioned dimensions, not just one of them.”

In other words, it’s not enough to claim to be an OpenStack expert. You have to talk the talk as well as walk the walk.


Intel Thinks OpenStack Is Ready For The Enterprise

May 26, 2015 by Michael  
Filed under Computing

The OpenStack Framework is rapidly maturing into a business IT platform that is ready for enterprise-grade deployment, according to firms involved in the OpenStack community, including Intel, which announced a technology called Clear Containers to secure containerised apps.

The OpenStack Foundation lined up a succession of organisations and vendors at the first OpenStack Summit of 2015 that are working to improve the platform or are already successfully operating it.

Some are using it at massive scale. eBay disclosed that its infrastructure already contains over 300,000 processor cores managed by OpenStack.

The message from many of those using and helping to develop OpenStack is that the platform has come a long way since it started as a joint project between Nasa and Rackspace back in 2010, and has become stable and mature enough for production purposes in a wide variety of use cases.

However, there is still room for improvement, especially when it comes to areas like setting up and updating an OpenStack cloud, according to Imad Sousou, general manager of Intel’s Open Source Technology Centre.

“At Intel, we believe that software-defined infrastructure is the cornerstone of the modern data centre, and OpenStack is the cornerstone of software-defined infrastructure, but there is a lot more work to do on it and a lot of sceptics out there,” he said.

Sousou compared OpenStack with Linux, which has taken 20 years or so to mature to the point where organisations can buy something like Red Hat Enterprise Linux which is easy to install and operate.


“We need to get to that level with OpenStack and software-defined infrastructure, and there is a lot of work going on in the community to get there,” he said.

Intel also detailed at the summit how the company is working to improve the security of containerised applications by using the VT-x extensions in its processors to enforce isolation between containers.

This is called Clear Containers, and is part of Intel’s Clear Linux, a lightweight operating system intended for data centre operations with technologies such as container platforms.

“Intel’s approach with Clear Containers offers enhanced protection using security rooted in hardware. By using virtualisation technology features [VT-x] embedded in the silicon, we can deliver the improved security and isolation advantages of virtualisation technology for a containerised application,” said Sousou.

In addition, Intel’s Clear Linux is able to launch a Clear Container in under 200ms and to run thousands of them on a single server node, according to Sousou.

Other firms discussing their involvement with OpenStack at the summit included Yahoo, which powers its online services with “hundreds of thousands” of servers managed by OpenStack.

US retail giant Walmart, meanwhile, disclosed that it has about 140,000 cores managed by OpenStack in the infrastructure used to operate its e-commerce platform.

“As production scenarios go, it doesn’t get much more serious than Walmart on Black Friday,” commented OpenStack Foundation executive director Jonathan Bryce.


OpenStack Boosts Its Hybrid Cloud Services

May 21, 2015 by Michael  
Filed under Computing

The OpenStack Foundation has announced new interoperability testing requirements for OpenStack-branded products, and is claiming rapid adoption of the federated identity service introduced in the latest OpenStack release, which makes it easier to combine private and public cloud resources.

Foundation executive director Jonathan Bryce said at the first OpenStack Summit event of 2015 that the vision for the OpenStack project was to create a “global footprint of interoperable clouds” that would enable users to seamlessly mix and match resources from their own data centre with those of public cloud providers, delivering a so-called hybrid cloud model.

To this end, Bryce announced new interoperability testing requirements for products that are branded as ‘OpenStack Powered’, including public cloud and hosted private cloud services as well as OpenStack distributions.

“This is a big milestone and introduces common code in every distribution that brands itself as OpenStack, and common APIs that have been tested and validated,” he said.

In practice, this means that, along with an OpenStack Powered logo, products will carry a badge to show certification.

This currently applies only to some of the platform’s core modules, such as Nova (compute), Swift (object storage), Keystone (identity service) and the Glance image service.

But it is intended as a guarantee to users that a certified product contains a set of core services consistent with all other OpenStack products that are similarly certified.

Vendors already offering certified products include HP, IBM, Rackspace, Red Hat, Suse and Canonical, but the list is set to expand this year.

“During 2015, this will go across all products that are OpenStack. You will be able to know what you are getting in an OpenStack Powered product, and you will be able to count on those as your solid foundation for cloud,” Bryce said.

Meanwhile, the Kilo release of OpenStack, available since last month, added federated identity to the Keystone service as a fully integrated feature for the first time.

Despite the feature being so new, OpenStack said that over 30 products and services in the OpenStack application catalogue support federated identity as of today, and that many OpenStack cloud providers have committed to supporting it by the end of this year.

Together, these two announcements are significant for OpenStack’s hybrid cloud proposition, as they will make it much easier to link a customer’s private cloud resources with those of a public cloud provider.

OpenStack Powered certification means that users can count on a consistent environment across the two, while Keystone provides a common authentication system that can integrate with directory services such as LDAP.
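As a rough illustration of that LDAP integration, a Kilo-era `keystone.conf` fragment might look something like the following. The option names follow Keystone's documented conventions of the time, and every value is a placeholder rather than a tested configuration.

```ini
# keystone.conf - point the identity backend at a corporate directory.
# Values are placeholders; consult the Keystone docs for your release.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
password = CHANGE_ME
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
```

With identity delegated to the directory, the same corporate credentials can authenticate against both the private cloud and a federated public provider.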

One company already taking advantage of this is high-tech post-production firm DigitalFilm Tree, which has been working with HP and hosted private cloud firm Bluebox to build a totally cloud-based production system for film and TV content.

The firm demonstrated at the summit how the system enables footage to be captured and uploaded to one cloud, then transferred to another cloud for processing.

Bryce explained that this is just one example of how OpenStack is driving new use cases and expanding what people can do across a variety of industries.

“Interoperability means you can share your cloud footprint. It shows the power of the ‘OpenStack planet’ we are trying to build,” he said.



OpenStack Cloud Services More Expensive Than Microsoft And Red Hat

May 6, 2015 by Michael  
Filed under Computing

451 Research has revealed that proprietary cloud offerings are currently more cost effective than OpenStack.

The Cloud Price Index showed that VMware, Red Hat and Microsoft all offer a better total cost of ownership (TCO) than OpenStack distributors.

The report blames the shortfall on a lack of skilled OpenStack engineers, leading to a high price for employing them.

Commercial solutions run at around $0.10 per virtual machine hour, compared with $0.08 for OpenStack, but going commercial is cheaper when labour and other external factors are taken into account.

The report claimed that enterprises could hire an extra three percent of staff for a commercial cloud rollout and still save money.

“Finding an OpenStack engineer is a tough and expensive task that is impacting today’s cloud-buying decisions,” said Dr Owen Rogers, senior analyst at 451 Research.

“Commercial offerings, OpenStack distributions and managed services all have their strengths and weaknesses, but the important factors are features, enterprise readiness and the availability of specialists who understand how to keep a deployment operational.

“Buyers need to balance all of these aspects with a long-term strategic view, as well as TCO, to determine the best course of action for their needs.”

Enterprises need to consider whether they may end up locked into a proprietary feature which could then go up in price, or whether features may become decommissioned over time.

451 Research believes that this TCO gulf will narrow in time as OpenStack matures and the talent pool grows.

The research also suggests that OpenStack can already provide a TCO advantage over DIY solutions, with a tipping point at which 45 percent of the manpower is saved by doing so. The company believes that the ‘golden ratio’ is 250 virtual machines per engineer.
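The labour effect 451 describes can be sketched with some assumed numbers. Only the $0.10/$0.08 per-VM-hour rates and the 250-VMs-per-engineer ratio come from the report; the salary figures below are invented purely to show how a scarcer, pricier engineer erodes OpenStack's nominal per-hour advantage.

```python
# Fold engineer cost into the per-VM-hour rate. Salaries are assumed
# for illustration; the rates and the 250 VMs/engineer ratio are 451's.
HOURS_PER_YEAR = 24 * 365  # 8,760

def tco_per_vm_hour(base_rate, engineer_salary, vms_per_engineer):
    labour = engineer_salary / (vms_per_engineer * HOURS_PER_YEAR)
    return base_rate + labour

commercial = tco_per_vm_hour(0.10, engineer_salary=100_000, vms_per_engineer=250)
openstack = tco_per_vm_hour(0.08, engineer_salary=150_000, vms_per_engineer=250)
# With a 50% salary premium for OpenStack skills, the cheaper headline
# rate ends up costing more per VM-hour overall.
```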

OpenStack’s next major release, Kilo, has just arrived, and Ubuntu and HP offer the first distributions to incorporate it.

Red Hat and Ubuntu are major contributors to the OpenStack code, in addition to their proprietary products, along with HP as part of its Helion range.


Citrix Finally Goes OpenStack

April 24, 2015 by Michael  
Filed under Computing

Citrix has become a corporate sponsor of the OpenStack Foundation in a push towards interoperability and unified standards in the cloud community.

As part of the announcement, Citrix said that products including NetScaler and XenServer will be coming to OpenStack.

Citrix has been a contributor to OpenStack for some time, but this sponsorship announcement sees the company ramping up its involvement and integrating its core product lines.

Klaus Oestermann, senior vice president and general manager of delivery networks at Citrix, said: “We’re pleased to formally sponsor the OpenStack Foundation to help drive cloud interoperability standards.

“Citrix products like NetScaler, through the recently announced NetScaler Control Centre, and XenServer are already integrated with OpenStack.

“Our move to support the OpenStack community reflects the great customer and partner demand for Citrix to bring the value of our cloud and networking infrastructure products to customers running OpenStack.”

Citrix already supports the Apache Software Foundation and the Linux Foundation, and has pledged to continue investing in Apache CloudStack and CloudPlatform in addition to its work with OpenStack.

Jonathan Bryce, executive director of the OpenStack Foundation, added: “Diversity and choice are two powerful drivers behind the success of OpenStack and the growing list of companies that have chosen OpenStack as their infrastructure platform.

“We’re glad to see Citrix become a corporate sponsor, and we look forward to the contributions they can bring to the community as it continues driving cloud infrastructure innovation and software maturity.”

Canonical announced on Tuesday that the 15.04 edition of Ubuntu OpenStack will be the first commercially available product to be based on OpenStack Kilo, which is due for release at the end of the month.

Early adopters will get the release candidate, and the full version will follow days after.

Citrix is joining the alliance at an interesting time. Earlier this year, it was revealed that HP had become the largest single contributor to the current OpenStack version, Juno, overtaking Red Hat.

A number of alliances are forming within the OpenStack community as companies jockey for position. HP has buddied up with telecoms companies including AT&T and BT, while Juniper and Mirantis have joined forces, although the latter has confirmed that this is not a snub to VMware.

Citrix coming aboard, with its existing ties to Apache and Linux, represents another example of the cross-pollination of the OpenStack movement across the industry, with companies clamouring to back it as either a first or second line of opportunity.


Red Hat And Canonical Discuss Linux 4.0

April 16, 2015 by Michael  
Filed under Computing

Red Hat has been telling everyone its plans to integrate the latest Linux 4.0 kernel into its products.

In a statement, a spokesman told us, “Red Hat’s upstream community projects will begin working with 4.0 almost immediately; in fact, Fedora 22 Alpha was based on the RC1 version of the 4.0 kernel.

“From a productization perspective, we will keep an eye on these integration efforts for possible inclusion into Red Hat’s enterprise portfolio.

“As with all of our enterprise-grade solutions, we provide stable, secure and hardened features, including the Linux kernel, to our customers – once we are certain that the next iteration of the Linux kernel, be it 4.0 or later, has the features and maturity that our customer base requires, we will begin packaging it into our enterprise portfolio with the intention of supporting it for 10 years, as we do with all of our products.”

Meanwhile, Canonical Head Honcho Mark Shuttleworth has confirmed that Linux Kernel 4.0 should be making its debut in Ubuntu products before the end of the year.

In an earlier note to The INQUIRER, Shuttleworth confirmed that the newly released kernel’s integration was “likely to be in this October release.”

The news follows the release of version 4.0 of the Linux kernel, which arrived, in what T S Eliot would describe as fashion, “not with a bang but a whimper”.

Writing on the Linux Kernel Mailing List on Sunday afternoon, Linux overlord Linus Torvalds explained that the new version was being released according to schedule, rather than because of any dramatic improvements, and because of a lack of any specific reason not to.

“Linux 4.0 was a pretty small release in linux-next and in final size, although obviously ‘small’ is relative. It’s still over 10,000 non-merge commits. But we’ve definitely had bigger releases (and judging by linux-next v4.1 is going to be one of the bigger ones),” he said.

“Feature-wise, 4.0 doesn’t have all that much special. Much has been made of the new kernel patching infrastructure, but realistically that wasn’t the only reason for the version number change. We’ve had much bigger changes in other versions. So this is very much a ‘solid code progress’ release.”

Come to think of it, it is very unlikely that T S Eliot would ever have written about Linux kernels, but that’s not the point.

Torvalds, meanwhile, explained that he is happier releasing to a schedule than in response to any specific feature, although he does note that the kernel repository has passed the four million git object mark, and Linux 3.0 was released around the two million mark, so there's a nice symmetry there.

In fact, back in 2011 the version numbering of the Linux kernel was a matter of some debate, and Torvalds’ lacklustre announcement seems to be pre-empting more of the same.

In a subsequent post Torvalds jokes, “the strongest argument for some people advocating 4.0 seems to have been a wish to see 4.1.15 – because ‘that was the version of Linux Skynet used for the T-800 Terminator.’”