Oracle is building a third data center in the UK, right next to the sweet-smelling Mars chocolate factory, to serve the British government’s G-Cloud plans.
According to the company, the new data center, opening in July, is located in Slough. It will offer cloud services and infrastructure as a service to government bodies as well as to independent software vendors working on state contracts. Oracle president Mark Hurd said in a press release that the new Equinix Slough data center will supplement the existing facilities at Linlithgow near Edinburgh and in Slough.
“As this whole cloud evolves and develops, you’ve got a lot of issues that come up. You’ve got security concerns, you’ve got data-sovereignty issues, you’ve got regulatory issues, you’ve got various issues that come up about the location of data — some of those are the physical location of data,” Hurd said.
The new data center is specifically for government projects and will meet the specific requirements of G-Cloud, including the IL3 security protocols. Hurd claims it will be a ring-fenced data center serving only the UK government, which is one of Oracle’s biggest clients in the UK.
Hurd said the company now has more than $1bn in cloud subscription revenue and claimed the company was now the second biggest player in the cloud.
“We’re globalising our capability. We have a very broad distribution capability so we sell close to the customer and we move our capabilities close to the customer as well,” Hurd said.
Dell’s profit for the quarter, ended May 3, was $130 million, down 79 percent from $635 million in the same quarter a year earlier. Revenue declined 2 percent to $14.07 billion.
Dell’s PC division was particularly hard hit. Sales for the quarter were down 9 percent to $8.9 billion, Dell said, and the group’s operating profit skidded 65 percent lower to $224 million. Laptop sales were hit especially hard.
Its enterprise business showed mixed performance. Sales of servers and network gear were up 14 percent but storage was down 10 percent. Dell’s services division reported a 2 percent increase in revenue.
Dell is trying hard to build an enterprise software business, which it hopes will eventually generate higher profits than its PC division. The software group reported an operating loss for the quarter, however, as Dell invested in new sales and R&D staff.
Dell’s earnings for the quarter on a pro forma basis, which excludes one-time items, were $0.21 a share, well off the analyst forecast of $0.35 a share, according to Thomson Reuters.
In a statement, CFO Brian Gladden said Dell’s profits were affected by steps it took to improve its competitiveness. “We’ll also continue to make important investments to support our strategy and drive long-term profitability,” he said.
Michael Dell announced in February that he planned to take the company private in a deal with Silver Lake Partners valued at $24.4 billion. The company founder has said he wants some breathing room to focus on long term investments without the constant scrutiny from Wall Street.
As we draw closer to the launch of Intel’s 4th generation Core CPUs, codenamed Haswell, it is no wonder that we are starting to see more leaks, and one showing Intel’s Core i7 4770K overclocked to 5GHz at 0.9V certainly drew a lot of attention.
An impressive overclocking achievement was spotted by Ocaholic.ch and shows a CPU-Z validation of Core i7 4770K overclocked to exactly 5005.83MHz at just 0.904V. As far as we can tell, Hyper-threading was disabled and it is not clear if the CPU is actually stable enough to run anything, but in any case, it is still an impressive result, especially at such low voltage.
The rest of the specs include 4GB of DDR3 memory and ASRock’s upcoming Z87 Extreme4 motherboard.
Intel is rather slow when it comes to the adoption of new wireless standards. Most, if not all, notebooks based on Intel platforms today feature 802.11n wireless, which with the help of a few antennas can get you between 150 and 450Mbit/s.
In reality 802.11n is usually much slower than those rated 150 to 450Mbit/s, and since the middle of last year 802.11ac routers have started to show up all around the world. The new standard can get you to 866Mbit/s and even higher, but Intel has been rather slow to adopt it.
Intel has promised that both Shark Bay notebook and desktop platforms for 2013 will get support for 802.11ac. The card is based on a 2×2 dual-band configuration and will support speeds of up to 867Mbit/s. In addition, it will support wireless 1080p display, Intel Smart Connect, Intel vPro (only with Y and U notebook processors) and Bluetooth.
This is Intel’s first product based on 802.11ac, but we believe that with time Intel will add more choices to its wireless portfolio, as a 3×3 802.11ac configuration should potentially run even faster. It will be interesting to test the new card in the real world and see whether 802.11ac wireless can get you any further than 802.11n in real-life applications.
Games publisher EA believes things will turn around for the company next year. This year has been pretty unpleasant for the company after its trusted DRM sank its flagship SimCity release.
But Electronic Arts seems to think that is all behind it and has forecast fiscal 2014 earnings above Wall Street’s expectations. EA has been cutting staff and reorganizing studios in recent months to embrace new game platforms. It is preparing a new batch of games including the latest installment of its “Battlefield” shooter game franchise.
Digital revenue, from mobile games, online offerings and other newer sales channels, rose 45 percent year-over-year to $618 million in the fourth quarter ended March 31, outstripping EA’s packaged goods business. The company thinks that consumers have held back from buying hardware and software as they await new versions of Sony’s PlayStation and Microsoft’s Xbox, expected later this year.
The video game maker forecast revenue of $4 billion, in line with Wall Street’s expectations. Weakness in the packaged games market dented revenue, but EA recognized $120 million of deferred payments from its “Battlefield Premium” service in the fourth quarter.
For the latest quarter, total revenue declined to $1.2 billion from $1.37 billion a year ago. Adjusted revenue rose 6.4 percent to $1.04 billion over the same period, barely beating analysts’ average estimate of $1.03 billion.
Net income fell to $323 million from $400 million last year.
It appears that the Ouya is going to be a bit delayed.
This is good news though, as it is being delayed because the console developers have more cash to spend on it, $15m more to be precise.
Ouya already raised around $7m on Kickstarter, and now, just as it should be taking its last steps towards completion, it has had almost twice as much again injected into it by lovely venture capitalists.
We were expecting the console in early June, but that has slid back to 25 June. The time and money will in part be used to solve an issue with sticky buttons, something that usually only happens once consumers have taken some hardware home with them.
The money comes from venture capital firms and other companies including Kleiner Perkins Caufield & Byers (KPCB), Nvidia, Shasta Ventures, and Occam Partners. KPCB’s general partner Bing Gordon will join the Ouya board of directors as a result.
“We want Ouya to be here for a long time to come,” said Julie Uhrman, Ouya founder and CEO.
“The message is clear: people want Ouya. We first heard this from Kickstarter backers who provided more than $8 million to help us build Ouya, then from over 12,000 developers who have registered to make an Ouya game, next from retailers who are carrying Ouya online and soon on store shelves, and now from top pioneering investors.”
Gordon is in charge of digital investments at KPCB and is a veteran of the games industry, having started at Electronic Arts in 1982.
“Ouya’s open source platform creates a new world of opportunity for established and emerging independent game creators and gamers alike,” he said.
“There are some types of games that can only be experienced on a TV, and Ouya is squarely focused on bringing back the living room gaming experience. Ouya will allow game developers to unleash their most creative ideas and satisfy gamers craving a new kind of experience.”
Ouya consoles should start arriving in living rooms on 25 June. If you want one, you are going to have to come up with around $100, plus another $50 if you want two controllers.
MySQL founder Michael “Monty” Widenius claims that Oracle is killing off his MySQL database and recommends that people move to his new project, MariaDB. In an interview with Muktware, Widenius said MariaDB, which is also open source, is on track to replace MySQL at Wikimedia and other major organisations and companies.
He said MySQL was widely popular long before it was bought by Sun because it was free and had good support; there was a rule that anyone should be able to get MySQL up and running in 15 minutes. Widenius was concerned about MySQL’s sale to Oracle and has watched the popularity of MySQL decline since. He said that Oracle was making a number of mistakes: new ‘enterprise’ extensions in MySQL are closed source, the bug database is not public, and the public MySQL repositories are no longer actively updated.
Widenius said that security problems were not communicated or addressed quickly, and that instead of fixing bugs Oracle is removing features. It is not all bad, he conceded: some of Oracle’s new code is surprisingly good, but the quality varies and a notable part needs to be rewritten before it can be included in the likes of MariaDB. Widenius said that it is impossible for the community to work with the MySQL developers at Oracle, as it doesn’t accept patches, does not have a public roadmap, and offers no way to discuss with MySQL developers how to implement things or how the current code works.
Basically, Oracle has made the project less open and its popularity has tanked, while more open versions of the code, such as MariaDB, are on the rise.
SOA Software has launched an application programming interface (API) gateway today that allows businesses to expose their APIs with a built-in, cloud-based developer community, helping them grow their services and get up and running more quickly.
The firm’s CTO Alistair Farquharson said the API Gateway is unique because it is a new concept in API and SOA management, aiming to “deliver new advantages in the application-level security space”.
“The new API Gateway provides monitoring, security and, more uniquely, a developer community as well, so it is kind of a turnkey approach to an API gateway where a customer can buy that product, get it up and running, and expose their API and their developer community to the outside world,” Farquharson said.
“[It will] support and manage the porting of mobile applications or web apps or B2B partnerships.”
Farquharson explained that there are three main components within the Gateway, which SOA Software has termed a “unified services gateway”: a runtime component, a policy manager, and a developer community.
The runtime component handles the message traffic, while the policy manager is capable of managing a range of different policies, such as threat protection, authentication, authorisation, anti-virus, monitoring, auditing and logging.
“The whole objective here is to get a customer up and running with APIs as quickly as possible to meet some kind of a business need that they have, whether that’s a mobile application initiative or a web application, integration or syndication,” Farquharson added.
The third component is the cloud-based “developer community”, which exposes an organisation’s APIs to the outside world so developers can take a look, read the documentation, and work out how to interact with them.
It’s this component that sets SOA Software’s Gateway apart from similar appliances on the market, claims Farquharson.
“It essentially becomes the developer site for your organisation, with it all running on a single appliance which is rather unique,” he added.
“The interesting thing about the gateway is that it does APIs as well as the services [that are] needed for mobile devices, so you have the old and the new encapsulated in a single appliance, which is very important to our customers.”
The developer community is offered through the API as a service, “like the Salesforce of APIs”, Farquharson said.
“Developers can go there and build their community, and it provides them with a high level of service and availability and a global infrastructure, and lets them leverage the strength of their community to get themselves going.”
A few years ago it would have been impossible for Intel to acquire AMD, simply due to regulatory constraints put in place by the FTC and the European Union. Intel had more than 60 percent of the PC and notebook market, so picking up AMD, a company with some 20 percent of the market, would have made Intel a real monopoly.
In the last two years the iPad, smartphones and ARM-based tablets have changed the landscape, eating up Intel’s revenue and market share. It is true that most people, especially professionals and the business crowd, use x86 processors, but this is rapidly changing as home users are happy with emailing, browsing and playing some games on their iPad or other tablets. This puts Intel in a world of trouble, as the PC market nosedived by 14 percent last quarter due to a lack of interest in new devices and upgrades.
Tablets are becoming couch browsing devices, people use their smartphones to read news on the go and sometimes at home. More and more users don’t even touch their notebooks or desktops at home. With ARM staying the dominant instruction set in the phone and tablet space, Intel is facing a serious issue as Apple, Samsung, Qualcomm and Nvidia are all making money on ARM chips.
This would be the main reason for Intel to pick up AMD. AMD would not cost that much, as Intel still has billions in the bank, and with AMD, Intel would gain great graphics, something the company has been struggling to crack for many years. It would make Intel slightly more competitive, but it would not solve all of its problems.
ARM manufacturers also face challenges: they need to produce more powerful chips and deliver a better user experience in order to win more notebook and detachable designs, although things are going well for ARM in non-Apple tablets. Apple uses ARM too, so in the tablet world ARM is winning this fight, and Qualcomm and Nvidia, as two independent chip manufacturers, could do a much better job at getting popular design wins. The Snapdragon S800 and Tegra 4 will get these two companies a step closer, while Apple will continue making good chips for iPads and iPhones. Let’s not forget Samsung, which makes many of the chips for its own phones and tablets.
AMD gained 14 percent on May 1st and an additional 5.9 percent yesterday, taking its stock up to $3.41. Back on April 30th, AMD stock was trading at $2.68. In the last three days of trading AMD gained 27.24 percent, or $0.73 per share, which is a huge leap for a company with a 52-week low of just $1.81.
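Those figures are consistent with each other: a $0.73 rise on a $2.68 base works out to 27.24 percent. A one-line sanity check (the helper name is ours, purely illustrative):

```c
/* Percentage gain from an opening price to a closing price. */
double pct_gain(double open, double close) {
    return (close - open) / open * 100.0;
}
```

Feeding in the April 30th close of $2.68 and yesterday's $3.41 gives roughly 27.24 percent, matching the three-day gain quoted above.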
AMD has said the memory architecture in its heterogeneous system architecture (HSA) will move management of CPU and GPU memory coherency from the developer’s hands down to the hardware.
While AMD has been churning out accelerated processing units (APUs) for the best part of two years now, the firm’s HSA is the technology that will really enable developers to make use of the GPU. The firm revealed some details of the memory architecture that will form one of the key parts of HSA and said that data coherency will be handled by the hardware rather than software developers.
AMD’s HSA chips, the first of which will be Kaveri, will allow both the CPU and GPU to access system memory directly. The firm said that this will eliminate the need to copy data to the GPU, an operation that adds significant latency and can wipe out any gains in performance from GPU parallel processing.
According to AMD, the memory architecture that it calls HUMA – heterogeneous unified memory access, a play on unified memory access – will handle coherency between the CPU and GPU at the silicon level. AMD corporate fellow Phil Rogers said that developers should not have to worry about whether the CPU or GPU is accessing a particular memory address, and similarly he claimed that operating system vendors prefer that memory coherency be handled at the silicon level.
Rogers also talked up the ability of the GPU to take page faults, and said that HUMA will allow GPUs to use memory pointers in the same way that CPUs dereference pointers to access memory. He said that the CPU will be able to pass a memory pointer to the GPU, just as a programmer may pass a pointer between threads running on a CPU.
AMD has said that its first HSA-compliant chip codenamed Kaveri will tip up later this year. While AMD’s decision to give GPUs access to DDR3 memory will mean lower bandwidth than GPGPU accelerators that make use of GDDR5 memory, the ability to address hundreds of gigabytes of RAM will interest a great many developers. AMD hopes that they will pick up the Kaveri chip to see just what is possible.
Intel has announced that it will launch its next generation Haswell processors at Computex.
Intel showed running Haswell silicon to journalists last month at the Game Developers Conference (GDC) in a bid to talk up the upcoming chip’s GPU. Last Friday the firm announced what some already knew and many had already guessed, that it will launch Haswell at Computex in June.
Intel published a blog post on 26 April saying that the fourth generation Core processor known as Haswell would arrive in 3,337,200,000,000,000 nanoseconds, which worked out to just under 39 days. The countdown figure matched perfectly with the start of Computex on 4 June, and confirmed what an Intel insider said that the chip would be launched at Computex.
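The sums check out: 3,337,200,000,000,000 nanoseconds is 3,337,200 seconds, or exactly 38.625 days, which counted from 26 April lands right on the 4 June start of Computex. A quick check (the function name is ours):

```c
/* Convert Intel's countdown figure from nanoseconds to days. */
double countdown_days(long long ns) {
    const double ns_per_s  = 1e9;
    const double s_per_day = 86400.0;  /* 24 * 60 * 60 */
    return (double)ns / ns_per_s / s_per_day;
}
```

Calling `countdown_days(3337200000000000LL)` returns 38.625, the "just under 39 days" in Intel's blog post.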
The fact that Intel is using Computex to launch its next generation chip is not surprising, given that there are few big IT shows during the summer and launching the chip later will not give the firm’s system builder and OEM partners enough time to gear up marketing for the lucrative back to school and holiday buying seasons.
While Intel’s Haswell launch is a big event for the firm, it isn’t the most important. Rather, the firm is expected to launch updated low-power Atom chips that it hopes will help it compete in the tablet market, a market that is growing, as opposed to the PC market that Haswell addresses.
Intel’s decision to launch at Computex means that the late spring computer industry show should be awash with updated notebook and desktop PCs, as well as the firm’s preferred ultrabook branded laptops.
CA Technologies has acquired application programming interface (API) management and security provider Layer 7 Technologies, giving it inroads into an area of technology boosted by the growth of mobile and cloud.
APIs are designed to let applications talk to each other, so that an e-commerce site can process an online transaction calling up a user’s bank details or a smart meter can connect to a utility system and back to an energy monitoring company. Canadian firm Layer 7, founded in 2002, offers technology that manages these API integrations to check that they are working properly and securely.
According to CA, the acquisition will let customers deploy cloud, mobile and “internet of things” initiatives, accelerate service delivery and govern API activity to enforce SLAs. CA plans to combine the Layer 7 technology with its own identity management and Lisa application delivery suite.
Layer 7 pointed out that there were more than 8,000 public APIs available at the end of 2012, saying “there is a vast library of proprietary components and data that need to be managed and secured from unauthorized access”.
Jacob Lamm, EVP of strategy and corporate development at CA, said the firm is “really really excited” about the Layer 7 deal, which has only just been signed and still has to officially close. He explained that the technology is a critical part of rounding out CA’s authorisation and authentication services.
“Think of the front door as the identity management, you knock on the door, we need to tell if you are who you say you are,” he stated.
“The back door are the applications, the APIs. Now especially with the cloud, with mobility, any application can be connected to hundreds of other services. How do I know they are who they say they are? We need to manage the connections between all those applications. API governance and security, that’s what Layer 7 adds to our security perimeter.”
Terms of the deal were not disclosed.
The Layer 7 acquisition by CA follows hot on the heels of Intel’s purchase of Mashery last week. Mashery also offers developers a way to manage application programming interfaces (APIs). Intel said that the team will report to its Services Division, founded in 2011 in a bid to have a potential revenue stream from devices that don’t use its chips.
ARM posted market-beating first-quarter financial results, thanks to strong demand for its chip designs. The company forecast that annual revenue would be in-line with market expectations.
ARM’s first quarter revenue rose 28 per cent to $170.3 million from $132.5 million a year earlier; analysts had expected a 20 per cent rise to $158.8 million. Adjusted pretax profit rose 44 per cent to $89.4 million from $61.9 million a year earlier, against analyst expectations of a 25 per cent jump to $77.6 million.
Chief Executive Warren East said the company had “delivered another quarter of strong revenue and earnings growth, driven by robust licensing and record royalty revenue.” ARM’s royalty revenues again outpaced the wider semiconductor industry, driven by market share gains in key end markets including digital TVs and microcontrollers, he said. ARM also continues to benefit from the growth in smartphones and tablets.
ARM said it had made an encouraging start to the year, with more leading companies choosing to sign up to ARM technology. More than 22 processor licenses were signed in the first quarter, ended March 31, across smartphones, mobile computing, digital television and other technology areas.
More than 2.6 billion ARM-based chips were shipped in the quarter, up 35 per cent from a year earlier.
We have already mentioned Intel’s one-chip Haswell platform on several occasions, but we have managed to get a few extra details about the chip. As we have stated many times before, one-chip Haswell comes in BGA packaging as an SoC that integrates a Haswell CPU with the Lynx Point LP PCH chipset.
The SoC packaging means lower production costs, a smaller power footprint and a lower TDP, everything you need in order to drive the SoC price down. We remember Dave Orton, the former CEO of a company acquired by AMD that went by the name of ATI, explaining the importance of APUs and SoCs. The explanation is rather simple: the more you integrate, the cheaper the chip ends up and the fewer pins you have, so theoretically you can make more money. That conversation happened in the summer of 2007, roughly a year after AMD acquired ATI and announced its plans to produce Fusion APUs.
Since top ARM chips such as Qualcomm’s Snapdragon 800 / 600 and Nvidia’s Tegra 4 have multiple cores, chipset elements and graphics all on the same package, it was only natural for Intel to take the same approach with Haswell. Qualcomm, Nvidia and Intel are after the same markets, tablets, notebooks and convertibles, with the slight advantage that Intel has x86 and the other two don’t.
Let’s not forget that one-chip Haswell is much bigger than any of the top ARM performers, but at the same time it brings a lot more performance. Despite billions of transistors in the 22nm SoC design, tablets and Ultrabooks based on one-chip Haswell, or Haswell-ULT as some call it, can expect 8- to 10-hour battery life from the Y processor line. That is a respectable score for PC-like performance, and with a scenario design power (SDP) of 7.5W these products come close to the top ARM performers with 5+W TDPs.
Intel stresses that these chips won’t simply land in tablets and Ultrabooks. It plans to use them in detachable, foldable and similar designs usually represented as the result of an unholy coupling between a notebook and a tablet.
The bad thing is that the Y line of tablet, Ultrabook, detachable and switchable SoC Haswell chips only comes in Q4 2013, so we are in for a pretty long wait.
Haswell will save your battery, Haswell has Connected Standby, and, importantly for some, it promises higher performance per clock. It also gets significantly better graphics.
Since Intel makes huge dies, it won’t be a problem to squeeze in some L4 (fourth-level) cache to boost memory bandwidth and lower latency in some of its Haswell SKUs. The Haswell variant internally known as Crystal Well offers a much larger L4 cache.
The size of the cache is not clear, but we have heard that there can be up to 64MB of cache dedicated to graphics, which does sound like a bit too much. According to engineers in the Far East, the L4 cache may remain dedicated to the GPU, but other independent sources claim that the L1, L2, L3 and L4 memory will be shared between CPU and GPU. We will have to look into which of these two theories is right.
Crystal Well is reserved for GT3 based highest end processors from Intel, and we have heard that it remains an exclusive technology for Core i7 processors. You will have to pay up to enjoy it.
L4 cache is nothing new in the GPU world, and consoles have been using such caches to speed up texturing and antialiasing. Dedicated cache on GPUs had been considered by Nvidia and ATI (even before the 2006 AMD acquisition) for years. The main obstacle was always that the transistor count for GPU cache memory was very high and would result in a huge chip, something that semiconductor manufacturers tend to avoid.
It will be interesting to see Haswell Crystal Well in action when it launches later this year, but we are certain we will see a huge performance leap over Intel’s Ivy Bridge HD 4000 series graphics.