The Machine initiative, which the firm claims will enable firms to analyse a trillion CRM records in a split second and run an entire data centre out of one box, was first announced at HPE’s Discover event two years ago. It sets out to reinvent the architecture of next-generation computers around the concept of memory-driven computing, using processors closely coupled with non-volatile memory (NVM).
The project is still at a relatively early stage, but it is clear that The Machine will be sufficiently different from current systems as to require an entirely new software ecosystem to function, which is why HPE is opening its doors to developers at this year’s Discover event in Las Vegas.
“Given the fundamental shift in how The Machine will work, the initiative aims to start familiarising developers with its new programming model as well as invite them to help develop the software itself,” the firm said in a statement.
“This is an uncommonly early opportunity for developers to help build components of The Machine from the ground up, since much of the software is in the starting phases.”
The tools initially available include four key code modules, created to let developer communities evaluate how The Machine is likely to affect applications such as machine learning and graph analytics.
The modules are: a novel database engine that speeds up applications by taking advantage of a large number of CPU cores and NVM; a fault-tolerant programming model for NVM that adapts existing multi-threaded code to take advantage of persistent memory; a Fabric Attached Memory emulator designed to allow users to explore the new architecture; and a DRAM-based performance emulator that uses existing hardware to emulate the latency and bandwidth characteristics of future NVM technology.
HPE said that it will continue to update this code and release additional contributions. Some of these will address changes to operating systems, including Linux, that will be required to enable them to run on The Machine.
HPE also intends to produce sample applications that demonstrate how The Machine can significantly improve application scale and performance.
NASA’s Jet Propulsion Laboratory (JPL) has built a new private cloud based on Red Hat’s build of the OpenStack framework to fulfil the growing computing requirements of its space missions, such as the Mars rovers.
The move was announced to coincide with the OpenStack Summit, and means that NASA’s JPL has access to enterprise-scale computing resources that will enable researchers to tap into their own private cloud and maximise the organisation’s server and storage capacity to process flight projects and research data.
The new cloud has been built by JPL’s own engineers, but Red Hat said that its experience from long-term participation in the OpenStack Foundation and key upstream contributions to specific platform projects made it well suited as the partner for this collaboration.
The move is not NASA’s first involvement with OpenStack. In fact, the entire OpenStack project grew out of a collaboration between the space agency and hosting firm Rackspace to develop an open source cloud computing platform to help drive the administration’s next generation of projects.
Red Hat said that by using its Red Hat OpenStack Platform to build their private cloud, the JPL’s engineers managed to save significant time and resources by retooling and consolidating in-house hardware rather than procuring entirely new infrastructure.
“This is a testament to the reliability, availability and scalability offered by a fully open cloud infrastructure built on Red Hat OpenStack Platform. We are proud of the partnership with NASA JPL to meet their needs for an agile infrastructure to meet their projected growth, while helping to reduce the data centre footprint,” said Radhesh Balakrishnan, Red Hat’s general manager for OpenStack.
Red Hat recently released the latest version of its platform, Red Hat OpenStack Platform 8, as well as the Red Hat Cloud Suite which combines its OpenStack build with the OpenShift Enterprise platform-as-a-service layer for running container-based applications and services.
The software is available to all through Developer Mode in Windows Settings, and it is not a virtual machine. Microsoft will allow native ELF binaries, written for Linux, to run under Windows through a translation layer. It is a bit like the WINE project, which runs native Windows binaries on Linux.
Normally you have to recompile Linux software under Cygwin, or run a Linux virtual machine to get it to run in Windows.
Microsoft claims the new feature offers a considerable advantage in performance and storage space. It also includes the bulk of Ubuntu’s packages, installed via the apt package manager directly from Canonical’s own repositories.
The big question is why. Redmond does not appear to be targeting the server market with this launch but desktop and laptop users. It appears to be mainly of use to developers, who need access to Linux software but for whatever reason wish to keep Windows 10 as their main OS.
Canonical’s Dustin Kirkland said the Windows Subsystem for Linux has nearly equivalent performance to running the software natively under Linux. The only downside is that the software is free but not open source.
General release is scheduled for later this year as part of the Windows 10 Anniversary Update, which will also include support for running Windows Universal Apps on the Xbox One, turning any Xbox One into a development system, the ability to disable V-sync for games installed through the Windows software storefront, ad-blocking support by default in Microsoft Edge, and improved stylus support.
HP Enterprise, which houses the former HP corporate hardware and services division, has done much better than the cocaine nose jobs of Wall Street expected.
Revenue in HPE’s enterprise group business, from which it derives more than half of its total revenue, rose about 1 percent to $7.1 billion in the first quarter ended January 31, from a year earlier.
This part of the business did not include the loss-making PC and printing division and is headed by Meg Whitman. The company also maintained its 2016 adjusted profit forecast of $1.85-$1.95 per share. Net earnings fell to $267 million in the first quarter ended January 31 from $547 million a year earlier.
The company’s revenue fell to $12.72 billion from $13.05 billion; analysts on average had expected $12.68 billion. Meanwhile the cut-off HP, which is looking after the PCs and printer ink, is doing rather badly and reported poor results last week.
Canonical has announced the release of the Snappy Ubuntu Core lightweight operating system for another starter kit.
The Intel NUC DE3815TY version is the first fruit of a Canonical and Intel project to create a standardised development platform for creating and testing x86 Internet of Things (IoT) projects.
The announcement explained: “We focused on the Intel NUC for its relatively low cost point for a starter platform (around $150) and broad availability (you can even find them on Amazon).
“This affordable device running Ubuntu Core offers a simple developer experience, making embedded development accessible to all with a deployment-ready edge computing option for IoT.”
Snappy Ubuntu Core 15.04 is already available as a download image from the Canonical site, with a 16.04 LTS version to follow as soon as it’s launched. The use of LTS builds reflects the three-year warranty of the Intel NUC.
The 115mm x 116mm x 40mm NUC is a more comprehensive starter kit than similar devices, boasting an Atom processor and room for up to 8GB of RAM and a 2.5in storage drive on top of the 4GB of onboard MMC memory. It’s designed for a wide range of prototyping use cases.
The blog continued: “This is exactly where snappy Ubuntu Core becomes powerful, combining the upgrade capabilities of Ubuntu Core and the app architecture. You can guarantee that your Intel NUC will satisfy today’s use case as well as tomorrow’s.”
Suggested use cases include digital signs and kiosks, thanks to the fanless design and low cost.
Snappy Ubuntu Core was launched formally at the beginning of 2015 as a stripped down, lightweight developer kit designed for fast IoT app design and deployment.
It requires as little as 600MHz of processing power and 128MB of RAM, meaning that, in comparison, the Intel NUC is like a supercomputer, leaving loads of room for future innovations as the IoT continues to pervade our everyday lives.
Hewlett Packard Enterprise has begun making a series of announcements to coincide with its first major showcase since splitting from HP Inc into an enterprise company with the “mindset of a startup”.
The first is a new product, HPE Synergy, which the company bills as a “new class of system to power the next era in hybrid infrastructure”.
The converged platform allows organisations to run a hybrid infrastructure by taking advantage of ‘fluid resource pools’, software-defined intelligence and a single API, making it easy to strike a continuous balance between on-premise and cloud computing.
Fluid resource pools combine compute, storage and fabric networking that can be composed on a case-by-case, need-by-need basis, booting up ready-to-deploy physical, virtual and containerised workloads as it does so.
The software-defined intelligence is able to self-discover and self-assemble the exact configuration and infrastructure needed for repeatable frictionless updates, while the unified API offers 100 percent infrastructure programmability, a bare-metal infrastructure-as-a-service interface and a single line of code to abstract every element of the infrastructure.
The HPE OneView UI offers a single interface for all types of storage within the infrastructure at a glance.
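HPE has not published the Synergy API itself, but the composable-infrastructure idea described above, declaring a workload’s needs in a template and drawing them from shared fluid pools in a single call, can be sketched in plain Python. All names and figures here are hypothetical and bear no relation to the real Synergy interface.

```python
# Hypothetical sketch of "infrastructure as code": a workload template
# is composed from fluid resource pools in one declarative step.
POOLS = {"compute_cores": 256, "storage_tb": 500, "fabric_gbps": 400}


def compose(template):
    """Reserve resources from the shared pools for one workload."""
    # Check the whole request first, so a failed compose changes nothing.
    for resource, amount in template["needs"].items():
        if POOLS[resource] < amount:
            raise RuntimeError("pool exhausted: " + resource)
    # Then draw the resources down and return the deployed description.
    for resource, amount in template["needs"].items():
        POOLS[resource] -= amount
    return {"name": template["name"], "state": "deployed", **template["needs"]}


web_tier = {"name": "web-tier",
            "needs": {"compute_cores": 32, "storage_tb": 10, "fabric_gbps": 25}}
deployed = compose(web_tier)
```

The point of the sketch is the shape of the workflow: resources are not pre-assigned to boxes but pooled, and a template plus one call yields a running configuration.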
“Market data clearly shows that a hybrid combination of traditional IT and private clouds will dominate the market over the next five years,” said Antonio Neri, executive vice president and general manager of the Enterprise Group at HPE.
“Organisations are looking to capitalise on the speed and agility of the cloud but want the reliability and security of running business-critical applications in their own data centres. With HPE Synergy, IT can deliver infrastructure as code and give businesses a cloud experience in their data centre.”
HPE Synergy is designed to support numerous existing systems from big names including Arista, CapGemini, Chef, Docker, Microsoft, Nvidia and VMware.
It will be available to customers directly and via channel partners starting in the second quarter of 2016. Pricing will be announced at launch.
The maker of expensive printer ink, HP, formally cut itself in two on Monday in a move to turn around its fortunes.
Now there will be two companies, HP Inc and Hewlett Packard Enterprise (HPE). Somewhat appropriately, HP Inc will be allowed to keep milking the ink business, but it will also have to sell the less useful PCs and printers. The smart money is on HPE, which has the company’s services and enterprise server hardware.
The plan was similar to something that ousted CEO Léo Apotheker came up with in 2011. Apotheker, however, planned to sell off PCs and printers to raise funds for acquisitions in such cool areas as Autonomy software. He never got around to selling off the PCs, but he did buy Autonomy.
Both companies have similar turnovers of around $57bn, and the bundling of the profitable printers division with struggling PCs will mean that HP ink will not be dead in the water before it starts.
The operational split between the two companies on 1 August also went smoothly, so today’s announcement is really more of a formality.
Some think that there will be some more fine tuning to come with a few more sell offs. Last week, HP flogged its security business, TippingPoint, to Trend Micro and then announced its decision to exit the public cloud market in favour of partnering with Amazon Web Services and Microsoft.
Still it does mean that the restructuring proceeded OK and on time. What will be more interesting is if the two-headed monster can see off competition better than a bigger beast with only one head and a bit of a limp.
HP has made a dramatic U-turn on its public cloud offering, just a week before the company splits in two.
The Helion Public Cloud will be “sunsetted” in January 2016 after failing to keep up with rivals such as Amazon Web Services (AWS).
But Helion product management SVP Bill Hilf explained in a blog post: “As we have before, we will help our customers design, build and run the best cloud environments suited to their needs, based on their workloads and their business and industry requirements.
“To support this new model, we will continue to aggressively grow our partner ecosystem and integrate different public cloud environments. To enable this flexibility, we are helping customers build cloud-portable applications based on HP Helion OpenStack and the HP Helion Development Platform.”
In other words, HP will now find partners to deliver public cloud, within the Helion and Helion OpenStack ecosystems, but not try to sell its own product which has been, for want of a better word, bobbins.
The decision is a direct contradiction of the stand taken in April, when Hilf wrote: “In the past week, a quote of mine in the media was interpreted as HP is exiting the public cloud, which is not the case. Our portfolio strategy to deliver on the vision of hybrid IT continues strong.”
The statement was a retort to a quote in The New York Times coinciding with the first anniversary of the Helion brand, in which Hilf is quoted as saying: “We thought people would rent or buy computing from us. It turns out that it makes no sense for us to go head-to-head.”
Hilf claimed that The New York Times quote was taken out of context.
HP is a leading member of the Cloud28+ initiative, which brings together a common standard for cloud service providers, and the implication is that HP Enterprise (as it will then be) will favour this pool for its future partnerships. Which probably means Amazon.
Ubuntu is to become the basis for the first officially supported Linux-based application for the Microsoft Azure cloud as it continues to expand its ties to the open source community.
Canonical and Microsoft confirmed in a joint announcement that the Hadoop-based big data service HDInsight will run on Ubuntu and the Hortonworks Data Platform.
T K Ranga Rengarajan, corporate vice president of data platform, cloud and enterprise at Microsoft, said: “The general availability of Azure HDInsight on Ubuntu Linux, which includes a service level agreement guarantee of 99.9 percent uptime and full technical support for the entire stack, offers the choice of running Hadoop workloads on the Hortonworks Data Platform in Azure HDInsight using Ubuntu or Windows.
“There’s also a growing ecosystem of ISVs delivering tools to create big data solutions on the Azure data platform with HDInsight.”
The news is an official announcement of a service first made available as a preview earlier in the year.
Since that time the unlikely combo said that they have seen growing adoption of HDInsight for Ubuntu as a straightforward way to move Hadoop easily from on-premise to the cloud.
Canonical explained that both companies have a common goal of hybrid cloud computing, including large-scale deployments spanning public and private infrastructures. It also pointed to the fact that there are more big data solutions on Ubuntu than on any other platform.
HDInsight is capable of running a wide variety of open source analytics engines, including Hive, Spark, HBase and Storm.
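The engines listed all build on, or interoperate with, the MapReduce model at Hadoop’s core: map records to key-value pairs, shuffle them into groups by key, then reduce each group. Stripped of its distributed machinery, the model fits in a few lines of plain Python; this is an illustrative sketch, not HDInsight code.

```python
from collections import defaultdict


def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1


def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between
    # the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups


def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}


docs = ["big data on Azure", "big data on Ubuntu"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

In a real Hadoop cluster the map and reduce functions run in parallel across many nodes and the shuffle moves data over the network, which is exactly the part a managed service like HDInsight takes off the user’s hands.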
Microsoft made the announcement as part of recent improvements to its Azure Data Lake service. These include the Azure Data Lake Store, which can provide a single repository for customers to easily capture data of any size, type and speed without forcing changes to their application as data scales, Azure Data Analytics and a new service built on Apache Yarn that dynamically scales the customer environment based on need.
Microsoft revealed recently that the company uses an Azure Switch built on Linux.
The OpenStack Community is turning its attention to support for containers and improving the platform’s enterprise-worthiness, as the OpenStack Foundation celebrated gaining non-profit status from the US government, a move that will free up extra resources for development, the organisation said.
Foundation executive director Jonathan Bryce said at the OpenStack Silicon Valley conference at California’s Computer History Museum that OpenStack has developed over the past five years into a general-purpose “integration engine” for IT departments to build infrastructure that allows them to operate a diverse array of applications and services.
“OpenStack has become a framework for computing that lets you plug in commercial and open source options for virtualisation, storage and networking, which is a key benefit for users. What that points to is that OpenStack operates as an integration engine that can take different types of hardware and software, and integrate them into a unified platform that users can operate applications and services on top of,” he said.
Bryce announced that the OpenStack Foundation, which oversees the activities of the OpenStack developer community, has been officially recognised as a tax-exempt non-profit business by the US government.
“From a practical perspective, this means we will have more resources to invest in the community over the long term,” he said.
Bryce also announced the launch of a new App Dev section on the OpenStack.org website with resources to help developers make better use of the OpenStack APIs, including a whitepaper on containers.
Containers are the hot technology of the moment, as they hold the promise of packaging applications and services for easy deployment in the cloud, with greater density and scalability than using virtual machines. Much of the effort in the OpenStack community is thus now focused on making containers work without being too restrictive or tying users into one container platform or another.
Docker has garnered much publicity for its container technology, but successfully bringing containers to OpenStack involves more than just supporting Docker, as Craig McLuckie, group product manager for Google’s Compute Engine platform, explained.
“There needs to be something to map containers to your OpenStack infrastructure, the compute, storage and network resources, so that applications inside the containers can access these,” he said.
Naturally, McLuckie held up the Kubernetes project that Google founded as a key part of the solution, with other pieces supplied by OpenStack’s Magnum and the Murano project started by OpenStack firm Mirantis.
“Magnum adds Kubernetes to OpenStack, while Mirantis’ Murano provides native Kubernetes package integration,” McLuckie explained, while adding that there is still much work to be done on properly integrating containers into OpenStack.
“We need to work together as a community to ensure that the core service model can span virtual machines and containers, and we need better integration with the Neutron (networking) module and a solution for containers on bare metal,” he said.
“Virtual machines still have a future as they are the only way to achieve the isolation some applications and services need, but for many people containers are the way forward for most workloads.”
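The mapping problem McLuckie describes, placing containers onto compute resources so the applications inside can run, is at heart a scheduling or bin-packing problem. The following is a purely illustrative first-fit sketch; real schedulers such as Kubernetes’ are far more elaborate, weighing affinity, networking and storage as well.

```python
def schedule(containers, hosts):
    """First-fit placement: each container lands on the first host
    with enough free CPU and memory; None means unschedulable."""
    placements = {}
    for name, (cpu, mem) in containers.items():
        for host, free in hosts.items():
            if free["cpu"] >= cpu and free["mem"] >= mem:
                free["cpu"] -= cpu   # reserve the resources
                free["mem"] -= mem
                placements[name] = host
                break
        else:
            placements[name] = None  # no host can fit this container
    return placements


hosts = {"node-1": {"cpu": 4, "mem": 8}, "node-2": {"cpu": 2, "mem": 4}}
containers = {"web": (2, 4), "db": (4, 8), "cache": (1, 2)}
result = schedule(containers, hosts)
```

Even this toy version shows why the integration work matters: the scheduler needs an accurate, live view of the underlying compute, storage and network pools, which in OpenStack means talking to Nova, Cinder and Neutron.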
Intel has teamed up with OpenStack distribution provider Mirantis to push adoption of the OpenStack cloud computing framework.
The deal, which includes a $100m investment in Mirantis from Intel Capital, will provide technical collaboration between the two companies and look to strengthen the open source cloud project by speeding up the introduction of more enterprise features as well as services and support for customers.
The funding will also bring on board Goldman Sachs as an investor for the first time, the firm said, alongside collaboration from the companies’ engineers in the community on OpenStack high availability, storage, network integration and support for big data.
“Intel is actually providing us with cash, so they’ve bought a co-development subscription from us. Then, in addition, we’ve strengthened our balance sheet by putting more equity financing dollars into the company. So overall the total funds are at $100m,” said Mirantis president and co-founder Alex Freedland.
“With Intel as our partner, we’ll show the world that open design, open development and open licensing is the future of cloud infrastructure software. Mirantis’ goal is to make OpenStack the best way to deliver cloud software, surpassing any proprietary solutions.”
Freedland added that there is nothing proprietary in the arrangement, and that the collaboration’s output is flowing directly into open source. No intellectual property is going to Intel.
“All this is community-driven, so everyone will be able to take advantage of it,” he added.
The move is part of the Cloud for All initiative announced by Intel in July.
Intel is becoming increasingly involved in OpenStack. The company said at the OpenStack Summit in May that it is making various contributions, including improving the security of containerised applications in the cloud using the VT-x extensions in Intel processors.
Other big companies are also backing the open source software. Google announced in July that it had joined the OpenStack Foundation as a corporate sponsor in a bid to promote open source and open cloud technologies.
Working closely with other members of the OpenStack community, Google said that the move will bring its expertise in containers and container management to OpenStack while sharing its work with innovative open source projects like Kubernetes.
HP has released its financial results for the third quarter, and they make for somewhat grim reading.
The company has seen drops in key parts of the business and an overall drop in GAAP net revenue of eight percent year on year to $25.3bn, compared with $27.6bn in 2014.
The company failed to meet its projected net earnings per share, which it had put at $0.50-$0.52, with an actual figure of $0.47.
The figures reflect a time of deep uncertainty at the company as it moves ever closer to its demerger into HP and Hewlett Packard Enterprise. The latter began filing registration documents in July to assert its existence as a separate entity, while the boards of both companies were announced two weeks ago.
Dell CEO Michael Dell slammed the move in an exclusive interview with The INQUIRER, saying he would never do the same to his company.
The big boss at HP remained upbeat, despite the drop in dividend against expectations. “HP delivered results in the third quarter that reflect very strong performance in our Enterprise Group and substantial progress in turning around Enterprise Services,” said Meg Whitman, chairman, president and chief executive of HP.
“I am very pleased that we have continued to deliver the results we said we would, while remaining on track to execute one of the largest and most complex separations ever undertaken.”
To which we have to ask: “Which figures were you looking at, lady?”
Breaking down the figures by business unit, Personal Systems revenue was down 13 percent year on year, while notebook sales fell three percent and desktops 20 percent.
Printing was down nine percent, but with a 17.8 percent operating margin. HP has been looking at initiatives to create loyalty among print users such as ink subscriptions.
The Enterprise Group, soon to be spun off, was up two percent year on year, but Business Critical Systems revenue dropped by 21 percent, offset by networking revenue, which climbed 22 percent.
Enterprise Services revenue dropped 11 percent with a six percent margin, while software dropped six percent with a 20.6 percent margin. Software-as-a-service revenue dropped by four percent.
HP Financial Services was down six percent, despite a two percent decrease in net portfolio assets and a two percent decrease in financing volume.
HP has proclaimed that it will buy 12 years of wind power from SunEdison and use it to run a new data centre in Texas.
The firm’s embracing of the wind market follows similar commitments from Facebook, which is planning to run its newest centre, the fifth so far, on wind power alone.
HP said that the 12-year purchase agreement will provide 112MW of wind power sourced from SunEdison and its nearby facilities.
The company said that 112MW could power some 40,000 homes, and will save more than 340,000 tons of carbon dioxide every year.
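Taking the company’s figures at face value, 112MW spread across 40,000 homes implies an average draw of 2.8kW per home, a sanity check that takes two lines:

```python
# Sanity-check the quoted figures: 112MW of capacity across 40,000 homes.
capacity_mw = 112
homes = 40_000
kw_per_home = capacity_mw * 1_000 / homes  # megawatts -> kilowatts per home
```

That assumes the turbines run at full nameplate capacity, which wind farms never do, so the 40,000-home figure is best read as marketing arithmetic rather than a delivery promise.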
HP added that the deal puts the firm well on the way to meeting its green goals this year, five years earlier than the 2020 target previously stated.
The renewable energy purchase is a first for HP and will power the new 1.5 million square foot data centre in Texas.
“This agreement represents the latest step we are taking on HP’s journey to reduce our carbon footprint across our entire value chain, while creating a stronger, more resilient company and a sustainable world,” said Gabi Zedlmayer, vice president and chief progress officer for corporate affairs at HP.
“It’s an important milestone in driving HP Living Progress as we work to create a better future for everyone through our actions and innovations.”
SunEdison, which HP calls the “world’s largest renewable energy development company”, is predictably excited to be the provider chosen to put the wind up HP servers.
“Wind-generated electricity represents a good business opportunity for Texas and for HP,” said Paul Gaynor, executive vice president, Americas and EMEA, at SunEdison.
“By powering its data centres with renewable energy, HP is taking an important step toward a clean energy future while lowering operating costs.
“At the same time, HP’s commitment allows us to build this project which creates valuable local jobs and ensures Texan electricity customers get cost-effective energy.”
Oracle said weak sales of its traditional database software licences were made worse by a strong US dollar, which lowered the value of foreign revenue.
Shares of Oracle, often seen as a barometer for the technology sector, fell 6 percent to $42.15 in extended trading after the company’s earnings report on Wednesday.
Shares of Microsoft and Salesforce.com, two of Oracle’s closest rivals, were close to unchanged.
Daniel Ives, an analyst at FBR Capital Markets, said that the announcement speaks to the headwinds Oracle is seeing in the field as its legacy database business sees slowing growth.
It also shows that, while the cloud business has seen pockets of strength, it is not doing as well as many thought.
Oracle, like other established tech companies, is looking to move its business to the cloud-computing model, essentially providing services remotely via data centres rather than selling installed software.
The 38-year-old company has had some success with the cloud model, but is not moving fast enough to make up for declines in its traditional software sales.
Oracle, along with German rival SAP, has been losing market share in customer relationship management software in recent years to Salesforce.com, which offers only cloud-based services.
Because of lower software sales and the strong dollar, Oracle’s net income fell to $2.76 billion, or 62 cents per share, in the fourth quarter ended May 31, from $3.65 billion, or 80 cents per share, a year earlier.
Revenue fell 5.4 percent to $10.71 billion. Revenue rose 3 percent on a constant currency basis. Analysts had expected revenue of $10.92 billion, on average.
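Falling 5.4 percent as reported while rising 3 percent in constant currency means the strong dollar cost Oracle roughly $1bn of revenue in the quarter. The adjustment can be reconstructed from the article’s own figures (the prior-year revenue is backed out rather than quoted, so treat it as derived):

```python
# Constant-currency growth strips out exchange-rate movements by
# restating this quarter's foreign revenue at last year's FX rates.
revenue_now = 10.71                            # $bn, as reported
revenue_prior = revenue_now / (1 - 0.054)      # back out prior-year revenue
revenue_constant_fx = revenue_prior * 1.03     # +3% at constant currency
fx_drag = revenue_constant_fx - revenue_now    # revenue lost to the dollar
```

So the roughly 8-point gap between the two growth rates corresponds to about $0.95bn of revenue translated away by currency moves.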
Sales from Oracle’s cloud-computing software and platform service, an area keenly watched by investors, rose 29 percent to $416 million.
A SENIOR MANAGER at Red Hat has warned the community of the importance of ensuring that OpenStack users have sufficient, qualified support for their infrastructure.
Alessandro Perilli, general manager for cloud management strategy at Red Hat, made the point in a blog post this week entitled Beware scary OpenStack support.
“Enterprise-grade support for any open source project, and especially for one as complex as OpenStack, can be articulated through many dimensions. However, they are almost never part of the conversation until too late,” he wrote.
Perilli goes on to list six key dimensions that system administrators should be looking for: expertise in the underlying operating system; security response; certification and compliance; code indemnification; vertical consulting; and extended cloud management.
He warned that enterprises are in great danger if they don’t stick to well-established Linux distros with experienced knowledge bases.
“When your OpenStack vendor is using a Linux distribution that has been in the market for a very short period (i.e. one year), has no history of contribution to the Linux distribution of choice, and doesn’t even mention its Linux distribution of choice in its marketing materials, this spells scary enterprise support,” he said.
It seems like obvious advice, but Perilli pointed to several major organisations that have fallen foul of this, and the results can be devastating because of the numbers involved in rolling out such an infrastructure.
Other potential pitfalls in the list include vendors that “cannot back port and port a security fix to older and newer versions of OpenStack before it’s fixed in the trunk code”, “have no experience in the legal implications with open source licensing”, “only support their own hardware”, “have a consulting division that consists of four engineers across five continents”, “have a cloud management platform that cannot support side by side server virtualization, IaaS, and PaaS across private and public environments” and many more.
OpenStack is as vulnerable to problems as any other, but being open source means that anyone can offer contributions and anyone can offer themselves as a vendor, a consultant and a self-proclaimed expert.
A recent study found that a Red Hat proprietary solution was among the offerings still able to undercut an OpenStack rollout. Meanwhile, its Fedora open source operating system has just reached version 22.
Perilli concluded by saying: “Any OpenStack provider claiming to offer enterprise-grade support must excel in every one of those aforementioned dimensions, not just one of them.”
In other words, it’s not enough to claim to be an OpenStack expert. You have to walk the walk as well as talk the talk.