ARM’s notable success in smartphones and tablets can obscure the fact that most of the chips using its designs are microcontrollers that process input from sensors. The firm has announced a collaboration with LogMeIn to push its Mbed project to developers who sign up for the Xively Cloud service.
ARM’s Mbed project aims to bring a standard workflow to hardware design in order to help more firms make better use of the microcontroller technology that already exists. Simon Ford, director of Online Tools at ARM, told The INQUIRER that the Mbed project is intended to help hardware designers turn microcontrollers into final products.
LogMeIn and ARM worked on the Xively cloud-based rapid prototyping service to offer hardware developers a way to speed up, and lower the cost of, the development lifecycle. Developers who sign up for the service will also get a Xively Jumpstart Kit that includes an ARM Mbed prototype module to get them started.
Ford said, “You’re trying to build a product, the intelligence you want embedded is critical but it isn’t the only problem you have. If you are trying to make a product, you have a whole raft of problems. [...] We are expanding the Mbed project to look at how do you have an industrial grade platform that is open, free to use and that removes barriers for someone that has this idea to proving a concept all the way to production.”
While ARM and LogMeIn promote the service as a way to build the much-hyped internet of things, it can be used to develop any hardware that makes use of ARM’s extensive range of microcontrollers. With LogMeIn’s Xively cloud service, the firms are hoping to help developers cut the costs associated with hardware design, enabling smaller firms to get into the market.
Nvidia CEO Jen-Hsun Huang gave a concrete reason for the Tegra 4 delay during the company’s latest earnings call.
The chip was announced back in January, but Jensen told investors that Tegra 4 was delayed because of Nvidia’s decision to pull Grey, aka Tegra 4i, in by six months. Pulling Tegra 4i in and scheduling it for Q4 2013 was, claims Jensen, the reason for the three-month delay in Tegra 4 production. On the other hand, we heard that early versions of Tegra 4 were simply getting too hot, and frankly we don’t see why Nvidia would delay its flagship SoC for tactical reasons.
Engaging the LTE market as soon as possible was the main reason for pulling in Tegra 4i, claims Jensen. It looks to us as if Tegra 4 will end up more than three months late, but we have been promised Tegra 4 based devices in Q2 2013, that is, by the end of June.
Nvidia claims Tegra 4i has many design wins and that it should be a very popular chip. Nvidia expects partners to announce devices based on the new LTE-capable chip in early 2014. Some might showcase devices as early as January, but we would be surprised not to see Tegra 4i devices at the Mobile World Congress next year, which kicks off on February 24th 2014.
Jensen described Tegra 4i as an incredibly well positioned product, saying that “it brings a level of capabilities and features of performance that that segment has just never seen”. The latter half of 2013 will definitely be interesting for Nvidia’s Tegra division and we are looking forward to seeing the first designs based on this new chip.
As we draw closer to the launch of Intel’s 4th generation Core CPUs, aka Haswell, it is no wonder that we are starting to see more leaks, and one showing Intel’s Core i7 4770K overclocked to 5GHz at 0.9V certainly drew a lot of attention.
The impressive overclocking achievement was spotted by Ocaholic.ch and shows a CPU-Z validation of a Core i7 4770K clocked at exactly 5005.83MHz at just 0.904V. As far as we can tell, Hyper-threading was disabled and it is not clear whether the CPU is actually stable enough to run anything, but in any case it is still an impressive result, especially at such a low voltage.
The rest of the specs include 4GB of DDR3 memory and ASRock’s upcoming Z87 Extreme4 motherboard.
Intel is rather slow when it comes to the adoption of new wireless standards. Most, if not all, notebooks based on Intel platforms today feature 802.11n capable wireless, which with the help of a few antennas can get you between 150 and 450Mbit/s.
In reality 802.11n is usually much slower than its rated 150 to 450Mbit/s, but since the middle of last year 802.11ac routers have started to show up all around the world. The new standard can get you to 866Mbit/s and even higher, yet Intel has been rather slow to adopt it.
Intel has promised that both Shark Bay notebook and desktop platforms for 2013 will get support for 802.11ac. The card is based on a 2×2 dual band configuration and will support speeds of up to 867Mbit/s. In addition, it will support wireless 1080p display, Intel Smart Connect, Intel vPro (only with Y and U processors for notebooks) and Bluetooth.
This is Intel’s first product based on 802.11ac, but we believe that with time Intel will add more choices to its wireless portfolio, as a 3×3 802.11ac configuration should run faster still. It will be interesting to test the new card in the real world and see whether 802.11ac can actually get you any further than 802.11n in real-life applications.
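The headline figures quoted for 802.11n and 802.11ac all fall out of the same physical-layer rate formula: data subcarriers × bits per symbol × coding rate ÷ symbol duration, multiplied by the number of spatial streams. A rough sketch, assuming the short guard interval and the top modulation each standard allows:

```python
# Theoretical 802.11 PHY rates: bits carried per OFDM symbol, divided by
# the symbol time, times the number of spatial streams.
def phy_rate_mbps(streams, data_subcarriers, bits_per_symbol, coding_rate,
                  symbol_time_us=3.6):  # 3.6 us = short guard interval
    bits_per_ofdm_symbol = data_subcarriers * bits_per_symbol * coding_rate
    return streams * bits_per_ofdm_symbol / symbol_time_us  # Mbit/s

# 802.11n, 40MHz channel, 64-QAM rate 5/6: 108 data subcarriers
n_1x1 = phy_rate_mbps(1, 108, 6, 5 / 6)   # ~150 Mbit/s per stream
n_3x3 = phy_rate_mbps(3, 108, 6, 5 / 6)   # ~450 Mbit/s, three streams

# 802.11ac, 80MHz channel, 256-QAM rate 5/6: 234 data subcarriers
ac_2x2 = phy_rate_mbps(2, 234, 8, 5 / 6)  # ~867 Mbit/s, Intel's 2x2 card
ac_3x3 = phy_rate_mbps(3, 234, 8, 5 / 6)  # ~1300 Mbit/s

print(round(n_1x1), round(n_3x3), round(ac_2x2), round(ac_3x3))
```

These are raw physical-layer rates; real-world throughput is far lower, which is exactly the gap between rated and actual speeds the article alludes to.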
It’s ancient history now, but once upon a time, if you wanted to play the most recent and most interesting games, you had to get up, leave the house and make your way to an arcade. Games consoles and home computers lived further down the food chain, their owners waiting for often sub-par versions of glorious arcade hits to be released on home systems. The real experience happened in an arcade.
Even to those who experienced that era, it’s a little hard to believe when you look at the sad remnants of their former glory which remain. Even in supposedly arcade-mad Japan, games generally find themselves wedged ignominiously in between gambling machines occupied by middle-aged chain-smokers and UFO Catcher booths promising, but rarely delivering, stuffed toys and sweets for bored teens on dates. In western countries, sad, lonely fighting game machines are just stuffed in where “arcade” owners ran out of fruit machines to install.
The reasons for this change are fundamentally technological. Arcade machines are big, bulky and expensive to move or replace. Once, that meant that they were vastly more powerful than home systems – but the accelerating pace of technological progress turned the size and expense of arcade machines into a liability rather than an advantage. Cheap, rapidly updated computers and consoles (and eventually even phones) first matched and then far outstripped the processing capabilities of big arcade cabinets. Rapid updates in graphics, processing, storage, networking, controls and screen resolutions were comfortably adopted by the home market, the costs buffered by cheap, cheerful hardware and absorbed by the wallets of millions of consumers. Arcade operators, faced with replacing large numbers of huge, expensive systems in order to keep track of such changes, fell behind completely.
Social factors either exacerbated or softened this blow, but these were highly region dependent. In Japan, where small family living spaces have engendered a culture in which many social activities are carried out external to the home, arcades persisted as date spots, as places to hang out with friends and – perhaps most importantly – as a venue for games too large, too noisy or too intrusive to be played in a small family home. In parts of the West, though, social factors intervened to hasten the decline, with a perception of arcades as “seedy” venues (in the grand tradition of pool halls and their ilk) discouraging many potential players, while regions with legalised gambling were quick to drop videogames in favour of more profitable slot machines.
Over the years, there has been talk of an “arcade renaissance” on several occasions, yet each time it has ended in disappointment. Even as living spaces in many Western countries (the UK is a particularly notable example) have shrunk dramatically in average size, Western consumers have demonstrated a continued willingness to engage with loud, bulky games. Rock Band and Guitar Hero were hugely successful as home games in the West, whereas their Japanese equivalents, Konami’s Guitar Freaks and Drum Mania, have acted as sustaining lifeblood for arcade venues. It’s also notable that even as Japanese arcades have innovated and invested, launching extraordinary new games which leverage all sorts of new technologies, from the country’s ultra high-speed broadband networks through to the possibilities of RFID enabled cards, the arcade sector’s health has still declined – a drop-off in footfall, revenue and floor space that’s been slower than in the West, but still isn’t exactly the rude health you might have come to believe from fawning articles about amazing Japanese arcades in the western media.
As such, it’s important to be cautious about any notion of an arcade recovery. Yet if we were to envisage any potential uplift in the fortunes of the out-of-home gaming sector, we can easily say what one key factor would be – just as in the heyday of the arcade, these venues would need to provide games which you simply cannot experience at home. This won’t come about, this time around, through more powerful graphics or processing – the trends in those areas are focused on miniaturisation and cost-efficiency, targeting the ability to put high-end 3D into phones rather than building pricey, bulky, ultra high-end systems. Instead, the focus would have to be on experiences that don’t work at home for reasons of space, budget, intrusiveness – or preferably, a combination of all of the above.
The reason I raise this issue now is that in the past few weeks, most of us will have seen videos or demonstrations of technologies which, although their creators purport to be focused on the home market, clearly fall into these categories. One is Microsoft’s Illumiroom system, which uses Kinect to map a 3D space and then projects imagery matched to that 3D map. It’s a great piece of technology with extraordinary gaming potential. It’s also abjectly unsuited to an ever-increasing number of living rooms around the world. Kinect alone is an impossibility for many players due to the space and room layout it demands; Illumiroom, demanding similar space if not more and intrusively taking over the entire room such that nobody else can use it concurrently with the game being played, is simply not going to work for most people and most homes. Outside the home, though, in a dedicated venue? The potential of the technology is extraordinary, the experiences it could enable creating a destination for gamers to experience something that just won’t work at home.
The same thought process applies, to some extent, to the Oculus Rift. It’s not that the superb VR headset hardware won’t work at home – of course it will, and it’ll probably only be a few hardware generations before the compromises presently being made in the name of cost are ironed out by technological progress. However, the “full” VR experience – with a custom controller (a gun, perhaps, or a full-body motion sensing suite), a multi-directional treadmill, and so on – is simply going to be too expensive for most users, and even if prices collapsed, it’s too big and unwieldy to live in most people’s apartments. Yet the entertainment potential of such a fully-functional setup, running in parallel with a dozen other such suites so that a group of friends can explore a virtual world together, is enormous – and from a commercial perspective, not even all that space-consuming.
Of course, technology is just one factor. Technologies such as these (and I’m sure that others exist which also fall into the trap of “amazing, but it won’t work in my house”) can give a compelling reason for people to engage with out-of-home gaming – but the social factors also have to be right if an arcade renaissance is to be possible. Social factors are trickier, in many ways, than getting the hardware and the software right. Losing the seedy, unwelcoming image of the arcade in some regions will be tough; in others, where arcades have died entirely, the marketing of an entirely new social pursuit would present a major challenge. Getting people to try out something like this might be easy; getting them to see a trip to the VR centre with friends as an entertainment option on par with a trip to the cinema is likely to be much harder.
All the same, the entertainment possibilities opened up by technologies of this kind, which are now reaching a mature, usable stage in their development, ought to create an optimism around arcades and out-of-home gaming that hasn’t been seen for some time. Social or commercial aspects could still pull the rug out from any hope of recovery or renaissance – but the potential certainly exists for new kinds of gaming and interactive entertainment to take their place as key social out-of-home experiences in the coming years.
A few years ago it would have been impossible for Intel to acquire AMD, simply due to regulatory constraints put in place by the FTC and the European Union. Intel had more than 60 percent of the PC and notebook market, so picking up AMD, a company with some 20 percent of the market, would have made Intel a real monopoly.
In the last two years the iPad, smartphones and ARM based tablets have changed the landscape, eating up Intel’s revenue and market share. It is true that most people, especially professionals and the business crowd, use x86 processors, but this is rapidly changing as home users are happy with emailing, browsing and playing some games on their iPad or other tablets. This puts Intel in a world of trouble, as the PC market nosedived by 14 percent last quarter due to a lack of interest in new devices and upgrades.
Tablets are becoming couch browsing devices, people use their smartphones to read news on the go and sometimes at home. More and more users don’t even touch their notebooks or desktops at home. With ARM staying the dominant instruction set in the phone and tablet space, Intel is facing a serious issue as Apple, Samsung, Qualcomm and Nvidia are all making money on ARM chips.
This would be the main reason for Intel to pick up AMD. AMD would not cost that much, as Intel still has billions in the bank, and with AMD, Intel would gain great graphics, something the company has been struggling to crack for many years. It would make Intel slightly more competitive, but it would not solve all of its problems.
ARM chip makers also face challenges: they need to produce more powerful chips and deliver a better user experience in order to win more notebook and detachable designs, but things are going well with non-Apple tablets. Apple uses ARM, so in the tablet world ARM is winning this fight, but Qualcomm and Nvidia, as two independent chip manufacturers, could do a much better job of getting popular design wins. The Snapdragon S800 and Tegra 4 will get these two companies a step closer, while Apple will continue making good chips for iPads and iPhones. Let’s not forget Samsung, which makes many of the chips for its own phones and tablets.
AMD gained 14 percent on May 1st, and an additional 5.9 percent yesterday, taking its stock up to $3.41. Back on April 30th, AMD stock was trading at $2.68. In the last three days of trading AMD gained 27.24 percent, or $0.73 per share, which is a huge leap for a company with a 52-week low of just $1.81.
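The daily percentages don’t simply add up to the three-day figure because gains compound; a quick check, treating the one unreported middle day as the remainder, shows the numbers hang together:

```python
# AMD share price moves cited above: $2.68 on 30 April, $3.41 three
# trading days later.
start, end = 2.68, 3.41

total_gain = (end - start) / start               # ~27.24% over three days
day1 = 1.14                                      # +14% on 1 May
day3 = 1.059                                     # +5.9% "yesterday"
implied_day2 = (1 + total_gain) / (day1 * day3)  # middle day, not reported

print(round(total_gain * 100, 2))                # 27.24
print(round((implied_day2 - 1) * 100, 1))        # 5.4
```

So the unnamed middle trading day implies a gain of roughly 5.4 per cent, consistent with the 27.24 per cent total.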
Ubisoft has confirmed that Watch Dogs will arrive on November 19th in North America and November 22nd in Europe. The game has been confirmed for the Xbox 360, Xbox Next, PlayStation 3, PlayStation 4, PC and Wii U. The release of the PlayStation 4 version is expected to coincide with the launch of the PlayStation 4 console itself, so the PS4 version’s date could shift depending on when the console arrives. (This apparently applies to the Xbox Next as well.)
We are also being told that the PS3 version of the game will include an additional 60 minutes of exclusive gameplay. We are not sure whether this will also be available to those who purchase the PS4 version, but we suspect that it will.
Four special edition versions of the game will be offered. It is not yet clear whether each of these special editions will be offered on every platform. More details are expected to follow in the days ahead, but these look like some very nice special editions of the game, with some generous extras thrown in.
Intel has announced that it will launch its next generation Haswell processors at Computex.
Intel showed working Haswell silicon to journalists last month at the Game Developers Conference (GDC) in a bid to talk up the upcoming chip’s GPU. Last Friday the firm announced what some already knew and many had already guessed: that it will launch Haswell at Computex in June.
Intel published a blog post on 26 April saying that the fourth generation Core processor known as Haswell would arrive in 3,337,200,000,000,000 nanoseconds, which works out to just under 39 days. The countdown matched perfectly with the start of Computex on 4 June, confirming what an Intel insider had said: that the chip would be launched at Computex.
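The arithmetic behind the countdown is easy to verify; a quick sketch, assuming the countdown started at midnight on the day of the 26 April post (a detail the post did not state):

```python
from datetime import datetime, timedelta

COUNTDOWN_NS = 3_337_200_000_000_000

seconds = COUNTDOWN_NS / 1e9        # 3,337,200 seconds
days = seconds / 86_400             # 38.625 days -- "just under 39"

post_date = datetime(2013, 4, 26)   # date of Intel's blog post
target = post_date + timedelta(seconds=seconds)

print(days)     # 38.625
print(target)   # 2013-06-03 15:00:00, the eve of Computex on 4 June
```

Landing late on 3 June, the countdown points squarely at the show’s 4 June opening.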
The fact that Intel is using Computex to launch its next generation chip is not surprising. There are few big IT shows during the summer, and launching the chip any later would not give the firm’s system builder and OEM partners enough time to gear up marketing for the lucrative back-to-school and holiday buying seasons.
While Intel’s Haswell launch is a big event for the firm, it isn’t the most important. Rather, the firm is expected to launch updated low-power Atom chips that it hopes will help it compete in the tablet market, a market that is growing, as opposed to the PC market that Haswell addresses.
Intel’s decision to launch at Computex means that the late spring computer industry show should be awash with updated notebook and desktop PCs, as well as the firm’s preferred ultrabook branded laptops.
CA Technologies has acquired application programming interface (API) management and security provider Layer 7 Technologies, giving it an inroad into an area of technology boosted by the growth of mobile and cloud.
APIs are designed to let applications talk to each other, so that an e-commerce site can process an online transaction by calling up a user’s bank details, or a smart meter can connect to a utility system and back to an energy monitoring company. Canadian firm Layer 7, founded in 2002, offers technology that manages these API integrations to check that they are working properly and securely.
According to CA, the acquisition will let customers deploy cloud, mobile and “internet of things” initiatives, accelerate service delivery and govern API activity to enforce SLAs. CA plans to combine the Layer 7 technology with its own identity management and Lisa application delivery suite.
Layer 7 pointed out that there were more than 8,000 public APIs available at the end of 2012, saying “there is a vast library of proprietary components and data that need to be managed and secured from unauthorized access”.
Jacob Lamm, EVP of strategy and corporate development at CA, said the firm is “really really excited” about the Layer 7 deal, which has only just been signed and still has to officially close. He explained that the technology is a critical part of rounding out CA’s authorization and authentication services.
“Think of the front door as the identity management, you knock on the door, we need to tell if you are who you say you are,” he stated.
“The back door are the applications, the APIs. Now especially with the cloud, with mobility, any application can be connected to hundreds of other services. How do I know they are who they say they are? We need to manage the connections between all those applications. API governance and security, that’s what Layer 7 adds to our security perimeter.”
Terms of the deal were not disclosed.
The Layer 7 acquisition by CA follows hot on the heels of Intel’s purchase of Mashery last week. Mashery also offers developers a way to manage APIs. Intel said that the Mashery team will report to its Services Division, founded in 2011 in a bid to build a revenue stream from devices that don’t use its chips.
ARM posted market-beating first quarter financial results, thanks to strong demand for its chip designs. The company forecast that annual revenue would be in line with market expectations.
ARM’s first quarter revenue rose 28 per cent to $170.3 million from $132.5 million a year earlier; analysts had expected a 20 per cent rise to $158.8 million. Adjusted pretax profit rose 44 per cent to $89.4 million from $61.9 million a year earlier, against analyst expectations of a 25 per cent jump to $77.6 million.
Chief Executive Warren East said the company has “delivered another quarter of strong revenue and earnings growth, driven by robust licensing and record royalty revenue.” ARM’s royalty revenues again outpaced the wider semiconductor industry, driven by market share gains in key end markets including digital TVs and microcontrollers, he said. ARM also continues to benefit from the growth in smartphones and tablets.
ARM said it had made an encouraging start to the year, with more leading companies choosing to sign up to ARM technology. More than 22 processor licenses were signed in the first quarter, which ended on March 31, across smartphones, mobile computing, digital television and other areas.
In the quarter more than 2.6 billion ARM-based chips were shipped, up 35 per cent from a year earlier.
Canonical has touted Ubuntu 13.04’s support for VMware’s ESX and Microsoft’s Hyper-V hypervisors.
Canonical’s Ubuntu 13.04 is the latest version of the firm’s popular Ubuntu Linux distribution that is growing in popularity in the server and specifically cloud market. The firm said it has been working with virtualisation vendors VMware and Microsoft to ensure that virtual machines running Ubuntu 13.04 are well supported by the underlying hypervisor, with Canonical telling The INQUIRER that collaboration with Microsoft has been particularly easy.
For some time now Canonical has been touting its Ubuntu distribution as being the most popular choice on Amazon’s EC2 public cloud, and while it has clearly worked hard in that area, the firm is now working on ensuring the operating system works on private clouds that run hypervisors from VMware and Microsoft.
Mark Baker, server product manager at Canonical, said that despite Canonical’s own use of KVM based virtualisation, it is important to support VMware and Microsoft.
Baker said, “Customers ask us ‘does it work?’ and we say yes, absolutely it works. Is it supported? If you are running a version of Ubuntu on [VMware] ESX and you have problems, and you have a support contract with Canonical, will we help you? Absolutely. Will VMware support it? That wasn’t necessarily clear before, but it is clear now that it is a supported guest platform.”
Baker continued, “We build and test Ubuntu on ESX every day now.” However, he did say that while VMware remains the market share leader, Microsoft’s Hyper-V is gaining popularity and the firm is working with Microsoft to ensure that Ubuntu 13.04 has the same level of support on Microsoft’s hypervisor.
Interestingly Baker said that working with Microsoft has been far more pleasant than many Linux fans might think and that working with Microsoft was a pragmatic solution to customers’ needs.
He said, “Go back a few years, I’m not sure you would have seen us and Microsoft collaborating in the way that we are. But we have to be pragmatic really. If customers want to run Ubuntu we don’t want our lack of integration or testing or interoperability to be a limiting factor. We’d much rather they run Ubuntu, albeit on Hyper-V on Windows 8 Server.
“Actually, we’ve had a fairly easy time collaborating with Microsoft, they are very cooperative.”
Canonical is right to work on ensuring Ubuntu 13.04 plays nicely with VMware and Microsoft hypervisors. KVM currently lacks market share, and it would be bad for the Linux community as a whole if one of the most popular distributions provided such a poor experience on established hypervisors that firms simply ignored Linux altogether.
Nvidia’s first Tegra 4 design win is here, apparently, and it doesn’t appear very impressive at all. Tegra 4 is late to the party, so it is a bit short on design wins, to put it mildly.
Now a new ZTE smartphone has been spotted by Chinese bloggers and it seems to be based on Nvidia’s first A15 chip. The ZTE 988 is a phablet, with a 5.7-inch 720p screen. It has 2GB of RAM, a 13-megapixel camera and a 6.9mm thin body. It weighs just 110g, which is pretty surprising. The spec is rather underwhelming, especially in the display department.
However, a grain of salt is advised, as it is still unclear whether the phone features a Tegra 4 or a Qualcomm chipset. Also, it is rather baffling to see a 720p screen on a Tegra 4 phablet; the chip seems like overkill for such a display.
We have already mentioned Intel’s one-chip Haswell platform on several occasions, but we have managed to get a few extra details about this chip. As we have stated many times before, one-chip Haswell comes in BGA packaging and is an SoC that integrates a Haswell CPU along with the Lynx Point LP PCH chipset.
The SoC packaging leads to lower production costs, a smaller power footprint and a lower TDP, everything you need in order to drive the price down. We remember Dave Orton, the former CEO of ATI, the company AMD acquired, explaining the importance of APUs and SoCs. The explanation is rather simple: the more you integrate, the cheaper the chip ends up and the fewer pins you have, so theoretically you can make more money. That conversation happened in the summer of 2007, roughly a year after AMD acquired ATI and announced its plans to produce Fusion APUs.
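Orton’s argument can be put into a toy cost model. The figures below are purely hypothetical illustrations, not real Intel or AMD costs; the point is only that merging two dies into one package eliminates a whole package and the interconnect pins the two chips used to need to talk to each other:

```python
# Hypothetical, illustrative numbers only -- not real chip costs.
def package_cost(die_cost, pins, cost_per_pin=0.02, base_package=1.50):
    """Crude model: each package has a fixed cost plus a per-pin cost."""
    return die_cost + base_package + pins * cost_per_pin

# Two-chip platform: CPU and PCH each carry pins for the link between them.
two_chip = package_cost(30.0, 1150) + package_cost(5.0, 700)

# One-chip SoC: a single package, and the CPU<->PCH link pins disappear.
one_chip = package_cost(35.0, 1400)

print(two_chip, one_chip)  # the integrated part comes out cheaper
```

Real cost structures are far more complicated, but the sketch captures the “fewer packages, fewer pins” point Orton was making.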
Since top ARM chips such as Qualcomm’s Snapdragon 800 and 600 and Nvidia’s Tegra 4 have CPU cores, chipset elements and graphics all in the same package, it was only natural for Intel to take the same approach with Haswell. Qualcomm, Nvidia and Intel are after the same market of tablets, notebooks and convertibles, with the slight advantage that Intel has x86 and the other two don’t.
Let’s not forget that one-chip Haswell is much bigger than any of the top ARM performers, but at the same time it brings a lot more performance. Despite its billions of transistors, the 22nm SoC design should let tablets and Ultrabooks based on one-chip Haswell, or Haswell ULT as some call it, deliver eight to 10 hours of battery life from the Y processor line. This is a respectable score for PC-like performance, and with a scenario design power (SDP) of 7.5W these products come close to the top ARM performers, which have TDPs of 5W and more.
Intel stresses that these chips won’t simply land in tablets and Ultrabooks. It plans to use them in detachable, foldable and similar designs usually represented as the result of an unholy coupling between a notebook and a tablet.
The bad thing is that the Y line of tablet, Ultrabook, detachable and convertible SoC Haswell chips only arrives in Q4 2013, so we are in for a pretty long wait.
Haswell will save your battery, it has connected standby, it promises higher performance per clock, which matters to some, and it also gets significantly better graphics.
Since Intel makes huge dies, it won’t be a problem to squeeze in some L4 (fourth-level) cache to boost memory bandwidth and lower latency in some of its Haswell SKUs. The Haswell variant internally known as Crystal Well is the one that offers this large L4 cache.
The size of the cache is not clear, but we have heard there could be up to 64MB dedicated to graphics, which does sound like a bit too much. According to engineers in the Far East, the L4 cache may remain dedicated to the GPU alone, but other independent sources claim that the L1, L2, L3 and L4 memory will be shared between the CPU and GPU. We will have to look into which of these two theories is right.
Crystal Well is reserved for Intel’s highest end GT3 based processors, and we have heard that it remains an exclusive technology for Core i7 processors. You will have to pay up to enjoy it.
L4 cache is nothing new in the GPU world, and consoles have used such caches to speed up texturing and antialiasing. Dedicated cache on GPUs was considered by Nvidia and ATI (even before the 2006 AMD acquisition) for years. The main obstacle was always that the transistor count for GPU cache memory is very high and it would result in a huge chip, something that semiconductor manufacturers tend to avoid.
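A back-of-the-envelope count shows why a big on-chip cache frightens chip designers, and why a 64MB L4, if real, would more plausibly be built from eDRAM (roughly one transistor per bit) than from the six-transistor SRAM cells used for L1 to L3. The figures below count data cells only, ignoring tags, ECC and control logic:

```python
# Transistor budget for a 64MB cache, data cells only (no tags/ECC).
CACHE_BYTES = 64 * 2**20
bits = CACHE_BYTES * 8        # 536,870,912 bits

sram_6t = bits * 6            # classic 6T SRAM cell
edram_1t = bits * 1           # 1T1C eDRAM cell

print(f"6T SRAM: {sram_6t / 1e9:.2f} billion transistors")
print(f"eDRAM  : {edram_1t / 1e9:.2f} billion transistors")
```

Over three billion transistors just for the SRAM array, in the same league as an entire high-end GPU of the era, explains why dedicated GPU caches were long avoided.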
It will be interesting to see Haswell Crystal Well in action when it launches later this year, and we expect to see a huge performance leap over Intel’s Ivy Bridge 4000 series graphics.
HP has said that its Moonshot servers are more open than AMD’s Seamicro servers that kicked off the microserver market in 2010.
AMD’s purchase of Seamicro last year highlighted the potential of the microserver market just as Intel was talking up Atom based servers. With HP launching its second generation Moonshot microserver system earlier this week, the parallels to AMD’s Seamicro products have led HP to claim that its Moonshot servers are “more open”.
David Chalmers, CTO of HP’s EMEA Enterprise Group, gave credit to Seamicro for getting its servers out the door first but said the firm didn’t work with as many partners. Chalmers said, “I would argue it’s more closed.
“There’s none of the ecosystem for example, there’s no ability to have lots of different partners who are contributing to the piece. They if you like are the classic little start-up with a great idea and got it to market, but it is in a very narrow fixed box.”
Chalmers continued by saying that HP’s Moonshot is a more open system because customers can choose from multiple chip vendors. He said, “We came at this with a very different point of view, we have engineered [Moonshot] to be a much more open platform, so multiple types of silicon, multiple partners involved, multiple people contributing to the [intellectual property], to what we think will be much richer, much more effective solution.”
AMD’s Seamicro servers make use of both Intel Atom and AMD Opteron chips, with the firm saying that it is evaluating Intel’s latest Avoton Atom chips. What makes things interesting is that AMD will supply HP with Moonshot cartridges and was present at the Moonshot launch with a prototype cartridge sporting its first server system on chip (SoC) codenamed Kyoto, and is expected to ship the cartridge in the second half of 2013.
Given that AMD is working on ARM server chips, it isn’t a stretch to think that its Seamicro division will have access to both x86 and ARM chips in the near future. That will make AMD’s Seamicro servers even closer rivals to HP’s Moonshot units, and they will have the advantage of having been in the market for three years.
HP’s Moonshot server launch brought together Intel and several high profile ARM vendors such as AMD, Calxeda and Texas Instruments. However, HP all but buried recognition of the ARM vendors under the announcements it led with: that Moonshot uses Intel’s year-old Centerton Atom chip and will have Avoton Atom based servers available in the second half of 2013.