HP has announced a series of new storage offerings aimed at mid-sized businesses, becoming the latest player in the rush to snap up flash-hungry enterprise customers.
Hot on the heels of SanDisk's InfiniFlash announcement, the new StoreVirtual 4335 hybrid flash array uses Adaptive Optimization tiering to deliver 12 times more storage than traditional hard disks, with a power and footprint saving of 90 percent per two-node cluster.
The system is compatible with HP StoreVirtual Virtual Storage Appliance (VSA) and HP’s Helion OpenStack.
HP says mid-sized businesses can deploy the hybrid storage platform cost-effectively, with just a couple of mouse clicks and zero downtime.
Also new is the StoreOnce Backup range, again aimed at small and medium-sized business deployments. The StoreOnce 2900 protects up to 70TB of data in a single 12-hour window and can restore 41TB in the same time, with up to 31.5TB available in a 2U rackspace footprint. It is fully compatible with HP Recovery Manager Central.
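For a sense of scale, the quoted window works out to the following sustained throughput (a rough back-of-the-envelope calculation, not an HP figure):

```python
# Implied sustained throughput from the StoreOnce 2900's quoted figures
backup_tb, restore_tb, window_h = 70, 41, 12

print(round(backup_tb / window_h, 1))   # ingest rate in TB/h
print(round(restore_tb / window_h, 1))  # restore rate in TB/h
```

That is roughly 5.8TB/h in and 3.4TB/h back out, the sort of number that matters when sizing a nightly backup window.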
HP StoreVirtual VSA offers a fully software-defined storage option with deduplicated disk backup features, and supports virtual machines running on VMware vSphere, Microsoft Hyper-V and Linux KVM.
Though up to 50TB is available, users can start small with a free 1TB service to try it out.
Finally, the StoreEasy NAS storage range has been refreshed with three new products, the 1450, 1650 and 1850, all based on HP ProLiant Gen9 servers, with 25-times faster RAID rebuilds and disaster recovery options.
By combining StoreEasy and LiveVault TurboRestore, customers can create a hybrid cloud backup with continuous redundant protection at two disparate locations, ideal for disaster recovery.
All of these products will be available by the end of March, with prices starting at $5,500.
The HSA Foundation has issued a new standard which lets graphics chips, processors and other hardware share memory directly, to boost things like video search.
The downside is that Intel and Nvidia do not appear to have been involved in the creation of version 1.0 of the Heterogeneous System Architecture specification.
Under the standard, compute, graphics and digital-signal processors will be able to address the same physical RAM directly and cache-coherently. That means the end of external buses and loosely linked interconnects, and allows data to be processed in parallel.
A GPU and CPU can work on the same bits of memory in an application in a multi-threaded way. The spec refers to GPUs and DSPs as “kernel agents” which sounds a bit like corporate spies for KFC.
The blueprints support 64-bit and 32-bit operation, and map out virtual memory, memory coherency, message passing, programming models and hardware requirements.
While the standard is backed by AMD, ARM, Imagination Technologies, MediaTek, Qualcomm and Samsung, Intel and Nvidia are giving it a miss. The thought is that with these names onboard there should be enough of a critical mass of developers who will build HSA-compliant games and tools.
Intel has announced details of its first Xeon system on chip (SoC), the new Xeon D 1500 processor family.
Although it is being touted as a server, storage and compute applications chip at the “network edge”, word on the street is that it could be under the bonnet of robots during the next apocalypse.
The Xeon D SoCs use the more useful bits of the E3 and Atom SoCs along with the 14nm Broadwell core architecture. The Xeon D chip is expected to bring 3.4 times better performance per watt than previous Xeon chips.
Lisa Spelman, Intel’s general manager for the Data Centre Products Group, lifted the kimono on the eight-core 2GHz Xeon D 1540 and the four-core 2.2GHz Xeon D 1520, both running at 45W. The chips also feature integrated I/O and networking to slot into microservers and appliances for networking and storage, the firm said.
The chips are also being touted for industrial automation and may see life powering robots on factory floors. Since simple robots can run on basic, low-power processors, there’s no reason why faster chips can’t be plugged into advanced robots for more complex tasks, according to Intel.
Every three years I install Linux and see if it is ready for prime time yet, and every three years I am disappointed. What is so disappointing is not that the operating system is bad, it never has been; it is that whoever designs it refuses to think of the user.
To be clear, I will lay out the same rider I have for my other three reviews. I am a Windows user, but not by choice. One of the reasons I keep checking out Linux is the hope that it will have fixed its basic problems in the intervening years. Fortunately for Microsoft, it never has.
This time my main computer had a serious outage caused by a dodgy Corsair (which is now a c word) power supply, and I have been out of action for the last two weeks. In the meantime I had to run everything on a clapped-out Fujitsu notebook which took 20 minutes to download a webpage.
One Ubuntu Linux install later it was behaving like a normal computer. This is where Linux has always been far better than Windows: making rubbish computers behave. I could settle down to work, right? Well, not really.
This is where Linux has consistently disqualified itself from prime-time every time I have used it. Going back through my reviews, I have been saying the same sort of stuff for years.
Coming from Windows 7, where a user can install the OS and start work with no learning curve, Ubuntu is impossible. There is a ton of stuff you have to download before you get anything that passes for an ordinary service, and that downloading is far too tricky for anyone who is used to Windows.
It is not helped by the Ubuntu Software Centre, which is supposed to make life easier for you. Say you need a flash player. Adobe has a flash player you can download for Ubuntu. Click on it and Ubuntu asks if you want to open the file with the Ubuntu Software Centre to install it. You would think you would want this, right? The thing is, pressing yes opens the Software Centre but does not install Adobe Flash Player. The centre then says it can’t find the software on your machine.
Here is the problem which I wrote about nearly nine years ago – you can’t download Flash or anything proprietary because that would mean contaminating your machine with something that is not Open Sauce.
Sure, Ubuntu will download all those proprietary drivers, but you have to know to ask – an issue which has been around for so long now that it is silly. The issue of proprietary drivers is only a problem for hardcore open saucers, and there are not enough of them to justify keeping an operating system in the dark ages for a decade. However, they have managed it.
I downloaded LibreOffice and all those other things needed to get a basic “windows experience” and discovered that all those typefaces you know and love are unavailable. They should have been in the proprietary pack but Ubuntu has a problem installing them. This means that I can’t share documents in any meaningful way with Windows users, because all my formatting is screwed.
LibreOffice is not bad, but it really is not Microsoft Word and anyone who tries to tell you otherwise is lying.
I downloaded and configured Thunderbird for mail, and for a few good days it actually worked. However, yesterday it disappeared from the sidebar and I can’t find it anywhere. I am restricted to webmail and I am really hating Microsoft’s Outlook experience.
The only thing that is different between this review and the one I wrote three years ago is that there are now games which actually work, thanks to Steam. I have not tried this out yet because I am too stressed with the work backlog caused by having to work on Linux without my regular software, but there is a feeling that Linux is at last moving to a point where it can be a little bit useful.
So what are the main problems that Linux refuses to address? Usability, interface and compatibility.
I know Ubuntu is famous for its shit interface, and Gnome is supposed to be better, but both look and feel dated. I also hate Windows 8's interface, which requires you to use all your computing power to navigate a touchscreen tablet interface when you have neither. It should have been an opportunity for open saucers to trump Windows with a nice interface – it wasn't.
You would think that all the brains in the Linux community could come up with a simple, easy-to-use interface which gives you access to all the files you need without much trouble. The problem is that Linux fans like to tinker; they don’t want usability and they don’t have problems with command screens. Ordinary users, particularly more recent generations, will not go near a command screen.
Compatibility issues for games have been pretty much resolved, but other key software is missing and Linux operators do not seem keen to get it on board.
I do a lot of layout and graphics work. When you complain about not being able to use Photoshop, Linux fanboys proudly point to GIMP and say it does the same things. You want to grab them by the throat and stuff their heads down the loo and flush. GIMP does less than a tenth of what Photoshop can do, and it does it very badly. There is nothing available on Linux that can do what CS or any real desktop publisher can do.
Proprietary software designed for real people using a desktop tends to trump anything open saucy, even when the open source version is a technical marvel.
So in all these years, Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product.
I am looking forward to next week when the new PC arrives and I will not need another Ubuntu desktop experience. Who knows, maybe they will have sorted it out in another three years' time.
Microsoft has been running its “personal assistant” Cortana on its Windows phones for a year, and will put the new version on the desktop with the arrival of Windows 10 this autumn. Later, Cortana will be available as a standalone app, usable on phones and tablets powered by Apple Inc’s iOS and Google Inc’s Android, people familiar with the project said.
“This kind of technology, which can read and understand email, will play a central role in the next roll out of Cortana, which we are working on now for the fall time frame,” said Eric Horvitz, managing director of Microsoft Research and a part of the Einstein project, in an interview at the company’s Redmond, Washington, headquarters. Horvitz and Microsoft declined comment on any plan to take Cortana beyond Windows.
The plan to put Cortana on machines running software from rivals such as Apple and Google, as well as the Einstein project, had not previously been reported. Cortana is the name of an artificial intelligence character in the video game series “Halo.”
They represent a new front in CEO Satya Nadella’s battle to sell Microsoft software on any device or platform, rather than trying to force customers to use Windows. Success on rivals’ platforms could create new markets and greater relevance for the company best known for its decades-old operating system.
The concept of ‘artificial intelligence’ is broad, and mobile phones and computers already show dexterity with spoken language and sifting through emails for data, for instance.
Still, Microsoft believes its work on speech recognition, search and machine learning will let it transform its digital assistant into the first intelligent ‘agent’ which anticipates users’ needs. By comparison, Siri is advertised mostly as responding to requests. Google’s mobile app, which doesn’t have a name like Siri or Cortana, already offers some limited predictive information ‘cards’ based on what it thinks the user wants to know.
Nvidia has fixed an ancient problem in Ubuntu systems which turned the screen into 40 shades of black.
The problem has been around for years and is common for anyone using Nvidia gear on Ubuntu systems.
When opening the window of a new application, the screen would go black or become transparent. As it turns out, this is an old problem, with bug reports dating back to Ubuntu 12.10.
However, to be fair, it was not Nvidia’s fault. The problem was caused by Compiz, which had some leftover code from a port. Nvidia found it and proposed a fix.
“Our interpretation of the specification is that creating two GLX pixmaps pointing at the same drawable is not allowed, because it can lead to poorly defined behavior if the properties of both GLX drawables don’t match. Our driver prevents this, but Compiz appears to try to do this,” wrote NVIDIA’s Arthur Huillet.
Soon after that, a patch was issued for Compiz and approved. The patch will be pushed out in Ubuntu 15.04 and is likely to be backported to Ubuntu 14.04 LTS.
“Today we’re happy to announce … 64-bit builds for Firefox Developer Edition are now available on Windows, adding to the already supported platforms of OS X and Linux,” wrote Dave Camp, director of developer tools, and Jason Weathersby, a technical evangelist, in a post to a company blog.
Firefox 38's Developer Edition, formerly called “Aurora,” now comes in both 32- and 64-bit versions for Windows. Under Mozilla’s current schedule, which launches a newly numbered edition every six weeks, Firefox 38 progresses through “Beta” and “Central” builds, with the latter, the most polished edition, releasing on May 12.
Camp and Weathersby touted the 64-bit Firefox as faster and more secure, the latter due to efficiency improvements in Windows’ anti-exploit ASLR (address space layout randomization) technology in 64-bit.
The biggest advantage of a 64-bit browser on a 64-bit operating system is that it can address more than the 4GB of memory available to a 32-bit application, letting users keep open hundreds of tabs without crashing the browser, or, as Camp and Weathersby pointed out, run larger, more sophisticated Web apps, notably games.
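The 4GB ceiling follows directly from pointer width: a 32-bit pointer can name at most 2^32 distinct bytes. A quick sketch of the arithmetic (plain Python, nothing browser-specific assumed):

```python
def addressable_gib(pointer_bits: int) -> float:
    """Bytes reachable by an N-bit pointer, expressed in GiB."""
    return (2 ** pointer_bits) / (1024 ** 3)

print(addressable_gib(32))  # 4.0 -- the hard ceiling for any 32-bit process
print(addressable_gib(64))  # astronomically larger; OSes expose only a slice
```

In practice a 32-bit process gets even less than 4GiB, since the OS reserves part of the address space for itself.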
Mozilla was the last 32-bit holdout among the top five browser makers.
Google shipped a Windows 64-bit Chrome in August 2014 and one for OS X in November, while Apple’s Safari and Microsoft’s Internet Explorer (IE) have had 64-bit editions on OS X and Windows since 2009 and 2006, respectively. Opera Software, the Norwegian browser maker known for its same-named desktop flagship, also offers a 64-bit edition on Windows.
Imagination has revealed a new four-core PowerVR GPU designed to bring high-quality graphics to smaller, cheaper devices such as budget smartphones, wearables and “space-constrained” Internet of Things (IoT) devices.
The PowerVR G6020 GPU is aimed at developers looking to create devices requiring low-power displays, including smartwatches, appliances, connected radios and dashboard screens in vehicles.
“This is a tiny GPU for 720p displays on small entry-level phones and tablets,” an Imagination representative told The INQUIRER at the firm’s booth at Mobile World Congress (MWC).
“The GPU is a stripped-down version of our highest-end GPU, leaving an architecture which is optimised for costs and making it as simple as possible for devices running Android Wear, for example, or [other] IoT devices.”
It can also power mobile hotspots, routers and M2M devices, the firm said.
The PowerVR G6020 GPU has been designed for graphics efficiency in ultra-compact silicon areas, and claims to provide better device performance and compatibility “without unnecessary overhead”.
The unit has four arithmetic logic unit (ALU) cores and a silicon footprint of 2.2mm², runs at 400MHz on a 28nm process, and features an optimised universal shading cluster engine designed for better user interface experiences.
Imagination claims that the GPU’s OpenGL ES 3.0 capability gives it a smooth user experience for high-definition displays at 720p.
“PowerVR is the ideal GPU for mobile and embedded because its programmable shader- and tile-based deferred rendering architecture leads to high-performance efficiency and the lowest power consumption per frame,” the firm said.
“In addition, PowerVR maximises bandwidth efficiency with Imagination’s advanced PVRTC2 texture compression technology that ensures minimum memory footprint and superior image quality.”
Also part of the announcement were the PowerVR E5800, E5505 and E5300 video encoders, based on an architecture that scales efficiently down to the ultra-low power requirements of devices such as wearables.
“The PowerVR 5 series offers the same quality of streaming video at half the bitrate, which is important for video conferencing over mobile networks such as 3G or 4G connections where bandwidth is limited,” the Imagination rep told us.
These PowerVR Series5 encoders support multiple standards in a single solution which leads to area savings and simplifies system integration. For example, it’s no longer necessary to add several cores to handle multiple formats on the same chip or maintain multiple drivers.
Imagination encoded two streams using the same encoder at the same bitrate to show the boost in quality that H.265 video offers.
The encoders also feature “region of interest encoding”. The technology shows how companies can build better video conferencing apps by combining PowerVR GPUs and video processors to enhance the focal point of a video stream, so that the encoder doesn’t need to work hard at improving the quality of the whole video, just the part which is important.
AT&T Inc will link its connected car and smart home technologies to expand its reach in the fast-growing market for Internet-connected devices, a new battleground for the telecom giant and its rivals.
The wireless company’s home security and automation service “Digital Life” and connected car service “Drive” will be integrated so users can control their homes from a dashboard in their vehicles, Glenn Lurie, chief executive of AT&T Mobility, told Reuters last week ahead of the company’s announcement at Mobile World Congress in Barcelona.
“Once you’ve told your home when the car is (for instance) within 20 feet of the house to please open the garage door, put the lights on, turn the alarm off, move the thermostat up, you can have those inanimate objects, the home and your car, really taking care of you,” Lurie said.
With the two services linked up, a “Drive” car can control devices in the home, including security cameras, air-conditioners, coffee makers, stereo systems, door locks, alarm sensors on windows and sensors that detect leaks from water pipes.
Most Americans own a mobile phone, and the $1.7 trillion U.S. wireless industry is turning for growth to connected devices.
AT&T said it had about 20 million connected devices from cars to cargo ship container sensors in 2014, up 21 percent from the year earlier. It has not yet revealed its revenue from its “Internet of Things” business.
Technology companies including Apple and Google are making their own plays. Mercedes-Benz has an application that lets drivers control thermostats from Nest, a company acquired by Google.
Analysts expect fast growth from the “Internet of Things”, or web-connected machines and gadgets. Connected car revenue is expected to be $20 billion annually by 2018 from $3 billion in 2013, and smart homes revenue is estimated to touch $71 billion by 2018, according to Juniper Research.
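Those Juniper figures imply a brisk compound annual growth rate for connected cars; a quick check of the arithmetic (the 2013 and 2018 endpoints are from the article, the rest is simple maths):

```python
# Compound annual growth rate implied by Juniper's connected-car forecast
start_revenue = 3e9   # 2013 revenue, USD
end_revenue = 20e9    # 2018 forecast, USD
years = 5

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(round(cagr * 100, 1))  # percent per year
```

That works out to roughly 46 percent a year, which goes some way to explaining the land grab.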
AT&T has deals with eight automakers from General Motors to Ford on connected car services. Lurie said it was still signing deals.
On the home front, it has partnered with home appliance makers such as Samsung and LG Electronics.
Customers will pay for the new service through AT&T’s Mobile Share Value plan. A user can add $10 to the monthly phone bill to share data across multiple connected devices such as wearables and cars, Lurie said. Or customers can opt for plans provided by their car manufacturer.
Qualcomm has unveiled what it claims is the world’s first ‘ultrasonic’ fingerprint scanner, in a bid to improve mobile security and further boost Android’s chances in the enterprise space.
The Qualcomm Snapdragon Sense ID 3D Fingerprint technology debuted during the chipmaker’s Mobile World Congress (MWC) press conference on Monday.
The firm claimed that the new feature will outperform the fingerprint scanners found on smartphones such as the iPhone 6 and Galaxy S6.
Qualcomm also claimed that, as well as “better protecting user data”, the 3D ultrasonic imaging technology is much more accurate than capacitive solutions currently available, and is not hindered by greasy or sweaty fingers.
Sense ID offers a more “innovative and elegant” design for manufacturers, the firm said, owing to its ability to scan fingerprints through any material, be it glass, metal or sapphire.
This means, in theory, that future fingerprint sensors could be included directly into a smartphone’s display.
Derek Aberle, Qualcomm president, said: “This is another industry first for Qualcomm and has the potential to revolutionise mobile security.
“It’s also another step towards the end of the password, and could mean that you’ll never have to type in a password on your smartphone again.”
No specific details or partners have yet been announced, but Qualcomm said that the Sense ID technology will arrive in devices in the second half of 2015, when the firm’s next-generation Snapdragon 820 processor is also tipped to debut.
The firm didn’t reveal many details about this chip, except that it will feature Kryo 64-bit CPU tech and a new machine learning feature dubbed Zeroth.
Qualcomm also revealed more details about LTE-U during Monday’s press conference, confirming plans to extend LTE to unused spectrum using technology integrated in its latest small-cell solutions and RF transceivers for mobile devices.
“We face many challenges as demand for data constantly grows, and we think the best way to fix this is by taking advantage of unused spectrum,” said Aberle.
Finally, the chipmaker released details about a new a partnership with Cyanogen, the open-source outfit responsible for the CyanogenMod operating system.
Qualcomm said that it will provide support for the best features and UI enhancements of CyanogenMod on Snapdragon processors, which will be available for the release of Qualcomm Reference Design in April.
The MWC announcements follow the launch of the ARM Cortex-based Snapdragon 620 and 618 chips last month, which promise to improve connectivity and user experience on high-end smartphones and tablets.
Aberle said that these chips will begin to show up in devices in mid to late 2015.
BlackBerry Ltd announced on Monday that it plans to roll out a cloud-based version of its device management platform BES12, a move that will make the service more accessible to small and medium-sized businesses that need to secure devices on their networks.
Waterloo, Ontario-based BlackBerry has built a reputation around its device management and security capabilities, catering mainly to the needs of large government agencies and corporations. With data security needs becoming more critical, and a number of new entrants in the field nipping at its heels, BlackBerry said it is now broadening its offerings.
BlackBerry’s new BES12 platform manages and secures not only BlackBerry devices, but also those powered by operating systems such as Google Inc’s Android, Apple’s iOS and Microsoft Corp’s Windows platform. It can also manage and secure medical diagnostic equipment, industrial machinery and even cars.
By offering a less costly cloud-based version of the system, BlackBerry hopes to attract a wider range of small- and medium-sized businesses that need these capabilities, but do not have the capacity to install and manage expensive servers of their own.
“We are trying to broaden the enterprise mobility management space,” said BlackBerry Chief Operating Officer Marty Beard on a conference call with media. “And a cloud version really enables us to broaden our footprint.”
The new cloud-based offering, unveiled at the Mobile World Congress in Barcelona on Monday, will be offered to customers later this month.
India’s Essar Group, a conglomerate with more than 60,000 employees spread across over two dozen countries, has signed up for a trial of the cloud-based version.
Beard said BlackBerry is seeing growing demand from smaller companies for cloud-based device management offerings, but is also getting demand from larger companies that have certain divisions or groups that need cloud-based capabilities.
China’s Lenovo Group Ltd announced that it will offer free subscriptions to Intel Corp’s security software to customers who purchased laptops that were shipped with a program known as “Superfish,” which made PCs vulnerable to cyberattacks.
Lenovo, the world’s biggest personal computer maker, last week advised customers to uninstall the Superfish program.
Security experts and the U.S. Department of Homeland Security recommended the program be removed because it made users vulnerable to what are known as SSL spoofing techniques that can enable remote attackers to read encrypted web traffic, steal credentials and perform other attacks.
Lenovo announced the offer to provide six-month subscriptions to Intel’s McAfee LiveSafe on Friday as it also disclosed plans to “significantly” reduce the amount of software that it ships with new computers.
Pre-loaded programs will include Microsoft Corp’s Windows operating system, security products, Lenovo applications and programs “required” to make unique hardware such as 3D cameras work well, Lenovo said.
“This should eliminate what our industry calls ‘adware’ and ‘bloatware,’” the Lenovo statement said.
Adi Pinhas, chief executive of Palo Alto, California-based Superfish, said in a statement last week that his company’s software helps users achieve more relevant search results based on images of products viewed.
He said the vulnerability was “inadvertently” introduced by Israel-based Komodia, which built the application that Lenovo advised customers to uninstall.
Komodia declined comment.
The new alert pops up in Chrome when a user aims the browser at a suspect site but before the domain is displayed. “The site ahead contains harmful programs,” the warning states.
Google singled out programs that “harm your browsing experience,” citing in the warning’s text those that silently change the home page or drop unwanted ads onto pages.
The company has long focused on those categories, for obvious if unstated reasons. It would prefer that people, let alone shifty software, not alter the Chrome home page, which features the Google search engine, the Mountain View, Calif. firm’s primary revenue generator. Likewise, the last thing Google wants is for adware, especially the most irritating kind, to turn everyone off online advertising.
The new alert is only the latest in a line of warnings and more draconian moves Google has made since mid-2011, when the browser began blocking malware downloads. Google has gradually enhanced Chrome’s alert feature by expanding the download warnings to detect a wider range of malicious or deceitful programs, and using more assertive language in the alerts.
In January 2014, for example, Chrome 32 added to the unwanted list threats that posed as legitimate software or tampered with the browser’s settings.
The browser’s malware blocking and suspect site warnings come from Google’s Safe Browsing API (application programming interface) and service; Apple’s Safari and Mozilla’s Firefox also access parts of the API to warn their users of potentially dangerous websites.
Chrome 40, the browser’s current stable version, can be downloaded for Windows, OS X and Linux from Google’s website.
Microsoft will double the per-PC price of support for enterprises still holding onto Windows XP systems when the anniversary of the aged OS’s retirement rolls around in April, according to a licensing expert familiar with the situation.
The per-PC price for what Microsoft calls “custom support agreements” (CSAs) will increase to $400, the expert said after requesting anonymity.
CSAs provide critical security updates for an operating system that’s been officially retired, as Windows XP was on April 8, 2014. CSAs are negotiated on a company-by-company basis and also require that an organization has adopted a top-tier support plan, dubbed Premier Support, offered by Microsoft.
The CSA failsafe lets companies pay for security patches beyond the normal support lifespan while they finish their migrations to newer editions of Windows. Most enterprises have shifted — and are continuing to do so — to Windows 7 rather than adopt Windows 8.1.
Last year, just days before Microsoft retired Windows XP, the company slashed the price of CSAs to $200-per-device with a cap of $250,000.
Because a CSA is an annual-only program — and Microsoft limits each organization to just three years of post-retirement support — agreements must be renewed each year. The first renewals come due in less than two months.
Ideally, companies that signed up for a CSA last year will have retired large numbers of Windows XP machines in the interim. A firm that halved its number of Windows XP PCs will pay the same as last year when it renews at the higher per-device price.
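The renewal arithmetic is worth spelling out. A minimal sketch using the article's prices ($200 last year, $400 at renewal); the 1,000-PC fleet and the assumption that the $250,000 cap still applies at renewal are illustrative, not from Microsoft:

```python
def csa_bill(pcs: int, price_per_pc: int, cap: int = 250_000) -> int:
    """Annual custom-support bill, applying the per-agreement cap."""
    return min(pcs * price_per_pc, cap)

year_one = csa_bill(1_000, 200)  # 1,000 XP PCs at $200 each
year_two = csa_bill(500, 400)    # fleet halved, price doubled
print(year_one, year_two)        # the two bills come out identical
```

Halving the fleet exactly cancels the doubled per-PC price, so a migration has to beat 50 percent a year just to keep the bill flat.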
It’s difficult to gauge the persistence of Windows XP in commercial settings, but the operating system, which debuted in 2001, continues to appear in analytics firms’ tracking.
According to U.S.-based Net Applications, for example, the global user share of XP stood at 20.7% of all Windows-powered PCs in January, representing more than 300 million machines. Meanwhile, Irish metrics company StatCounter pegged XP’s usage share at 12% for January.
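Net Applications' two numbers can be cross-checked against each other: if 300 million machines make up 20.7 percent of Windows-powered PCs, the implied total install base follows (a rough sanity check, assuming the two figures are meant to be consistent):

```python
xp_machines = 300_000_000
xp_share = 0.207  # XP's share of all Windows-powered PCs

implied_windows_pcs = xp_machines / xp_share
print(round(implied_windows_pcs / 1e9, 2))  # total Windows PCs, in billions
```

That comes to about 1.45 billion Windows PCs, in the right ballpark for the install base at the time.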
AMD has confirmed that it is releasing new AMD A8-7650K APUs today.
The chips are based on the “Kaveri” design and are designed for overclockers on a budget.
The APU has four “Steamroller” cores (two dual-core modules) operating at 3.30GHz/3.90GHz clock-rate, 4MB L2 cache, AMD Radeon R7 graphics engine with 384 stream processors, a dual-channel DDR3 memory controller, unlocked multiplier and up to 95W thermal design power. The chip will be drop-in compatible with FM2+ mainboards.
AMD will officially start to sell the A8-7650K on 20 February 2015. In Japan, where prices are traditionally a bit higher than in the rest of the world, the APU will cost $117.
The new chip is slower than the company’s A8-7700K, which AMD discontinued late last year. That said, it is not entirely clear why the company decided to replace an APU with a lower-performing product rather than simply dropping the price of the A8-7700K.
Later this year AMD plans to release a family of A-series APUs known as “Kaveri Refresh” or “Godavari”, which will have higher clock-rates.