In the second half of 2013 Intel was forced to deal with at least six different desktop processor groups. At the top of the food chain Intel has Ivy Bridge E and Sandy Bridge E, followed by Haswell LGA 1150 and Ivy Bridge LGA 1155 processors. The low end carries the remains of Sandy Bridge processors, Celeron BGA parts and Bay Trail Atom processors.
As you can imagine, Ivy Bridge E and Sandy Bridge E, both based on the LGA 2011 socket, occupy some two percent of Intel’s total socket market, while Haswell LGA 1150 reaches almost 30 percent of total shipments by socket in 2H 2013.
The most dominant products were naturally Ivy Bridge LGA 1155 parts, which accounted for more than sixty percent of total shipments. Sandy Bridge 32nm processors in Socket 1155 take three percent of total shipments in 2H 2013, while Celeron BGA / Bay Trail D and old Atoms based on 32nm Cedar Trail should occupy some five percent.
In 1H 2014 Ivy Bridge E will eat the Sandy Bridge E market, taking most of the pie for itself. Haswell and the Haswell refresh, both LGA 1150 parts, should occupy close to 55 percent of the market, while Ivy Bridge is doomed to shrink to 40 percent. Sandy Bridge LGA 1155 will account for some one to two percent of socketed processors shipping in 1H 2014, while Celeron BGA and Bay Trail D (the same silicon under a different brand) will grow into the Cedarview D market and conquer the rest of the low end.
Both Haswell refresh and Bay Trail D should continue growing in 2H 2014 according to Intel’s desktop transition guide.
AMD has announced that its proprietary Mantle graphics API is attracting more interest as some big names sign up. Rebellion Entertainment has entered the game with its Asura engine and officially adopted Mantle for its upcoming Sniper Elite 3.
It looks like Sniper Elite 3 will be the first Rebellion title to support Mantle. So far no one is saying exactly what Asura will gain from running on Mantle, but it seems likely to deliver performance boosts as well as enhanced graphics quality. Chris Kingsley, chief technology officer and co-founder of Rebellion Entertainment, said in a press release that his studio was pushing technology as far as it could.
“We are excited about the possibilities that Mantle brings to PC gaming and the industry as a whole. We believe that supporting Mantle will enable us to stay on the bleeding edge of PC gaming and ensure that we don’t leave any performance on the table when it comes to offering gamers amazing experiences,” he said.
Mantle is an application programming interface for Windows designed specifically for graphics processing units based on the Graphics Core Next (GCN) architecture, and it presents a deeper level of hardware optimisation. Mantle is supposed to bypass bottlenecks in modern PC API architectures and enable nine times more draw calls per second than DirectX and OpenGL thanks to lower CPU overhead, AMD claims.
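To see why draw-call overhead matters, here is a minimal sketch of the concept in C++. The types and functions are entirely hypothetical, since AMD has not published Mantle’s actual headers; the point is simply the difference between paying driver overhead on every draw call and recording a batch once:

#include <cstdio>
#include <vector>

// Hypothetical renderer types, for illustration only.
struct DrawCall { int meshId; int instanceCount; };

// Classic path: in a real driver, every call pays validation and
// translation costs; the printf here stands in for that per-call work.
void submitImmediate(const std::vector<DrawCall>& calls) {
    for (const DrawCall& c : calls) {
        std::printf("draw mesh %d x%d (full per-call driver work)\n",
                    c.meshId, c.instanceCount);
    }
}

// Mantle-style path: record draws into a command buffer up front,
// then hand the whole buffer to the GPU in a single submission.
struct CommandBuffer { std::vector<DrawCall> recorded; };

void record(CommandBuffer& cb, const DrawCall& c) { cb.recorded.push_back(c); }

void submitBatched(const CommandBuffer& cb) {
    std::printf("submitting %zu draws in one batch\n", cb.recorded.size());
}

int main() {
    std::vector<DrawCall> calls = {{1, 10}, {2, 4}, {3, 1}};
    submitImmediate(calls);   // one round of driver overhead per draw

    CommandBuffer cb;
    for (const DrawCall& c : calls) record(cb, c);
    submitBatched(cb);        // one round of overhead for the whole lot
}

The nine-times figure is AMD’s claim, not something this toy example can demonstrate, but it shows where the saving is supposed to come from: the per-call work moves out of the hot loop.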
Researchers have made a quantum leap in the search for ultra-fast computing.
Scientists at Simon Fraser University managed to keep information in a quantum memory state for 39 minutes, smashing the previous world record.
Previous attempts yielded results of under 30 seconds at room temperature and just under three minutes in cryogenic conditions.
The global race to harness the power of qubits has high stakes: the ability to create computers capable of calculating many times faster. Qubits are able to exist simultaneously in a superposition of ‘0’ and ‘1’.
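In the standard notation, a qubit’s state is a weighted combination of the two basis states:

$$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 $$

where measuring the qubit yields ‘0’ with probability $|\alpha|^2$ and ‘1’ with probability $|\beta|^2$. The trick, and the reason the 39-minute result matters, is keeping that superposition intact for long enough to compute with it.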
This experiment involved a new type of silicon that could, scientists believe, be the secret to creating long-term memory in quantum systems.
Speaking to Sky News, co-author of the paper Stephanie Simmons of Oxford University said, “Thirty-nine minutes may not seem very long but as it only takes one-hundred-thousandth of a second to flip the nuclear spin of a phosphorus ion – the type of operation used to run quantum calculations – in theory over two million operations could be applied in the time it takes for the superposition to naturally decay by one percent.”
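The arithmetic behind that claim checks out. Thirty-nine minutes is 2,340 seconds, so a one percent decay of the superposition takes about 23.4 seconds, and at one hundred-thousandth of a second per spin flip:

$$ \frac{0.01 \times 2340\ \text{s}}{10^{-5}\ \text{s per operation}} \approx 2.3 \times 10^{6}\ \text{operations} $$

which is the “over two million operations” Simmons is talking about.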
The next stage will be to find a way to manipulate the qubits to talk to each other in a meaningful way so that information can be passed between them during their short, glorious lives.
Although there is a significant amount of research to come before quantum computing provides an effective alternative to traditional methods, this has been a huge leap forward for the concept, and it’s widely expected that eventually the next leap will be the leap home.
Although its new Mantle API was announced back at the Hawaii launch bash, AMD did not share too many details. As far as technology goes, Mantle is more or less straightforward; it’s a thin layer on top of GCN hardware.
The general idea is that Mantle could drastically reduce API overhead in certain scenarios, allowing developers to tap the full potential of GCN-based GPUs. One of the ways it does this is by batching draw calls, grabbing more polygons to render without placing much strain on the CPU.
It also allows developers to streamline games for optimum performance, by queuing and distributing threads on the CPU and GPU, thereby harnessing more computing power from both. This process could be improved upon with a bit of help from HSA, which means upcoming AMD APUs could gain a bit of gaming muscle with no added silicon.
Then there’s parallelism – Mantle can use multiple CPU cores more efficiently than DirectX and OpenGL, reducing CPU workloads. It gets even better with multi-GPU support, as Mantle can basically “see” multiple GCN-based GPUs as a single GPU, improving load balancing and eliminating microstuttering in the process. It’s also interesting from an APU perspective, as it could potentially lead to even greater performance gains for users with AMD APUs and discrete graphics, but we’ll just have to wait and see.
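As a rough illustration of the multi-core angle, the idea is that each CPU thread records its own command buffer independently and the buffers are submitted together. Again, the types here are hypothetical stand-ins rather than AMD’s actual API, which has not been published:

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical command buffer; stands in for whatever Mantle exposes.
struct CommandBuffer { std::vector<int> draws; };

// Each worker thread records draws for its slice of the scene on its
// own, with no global driver lock serialising the threads.
void recordSlice(CommandBuffer& cb, int first, int count) {
    for (int i = 0; i < count; ++i)
        cb.draws.push_back(first + i);
}

int main() {
    const int threads = 4;           // one recorder per CPU core
    const int drawsPerThread = 256;

    std::vector<CommandBuffer> buffers(threads);
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t)
        workers.emplace_back(recordSlice, std::ref(buffers[t]),
                             t * drawsPerThread, drawsPerThread);
    for (std::thread& w : workers) w.join();

    // A single submission hands all pre-recorded buffers to the GPU.
    size_t total = 0;
    for (const CommandBuffer& cb : buffers) total += cb.draws.size();
    std::printf("submitting %zu draws recorded on %d threads\n",
                total, threads);
}

DirectX 11’s deferred contexts attempt something similar, but in practice the driver still serialises much of the work, which is exactly the bottleneck Mantle claims to remove.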
So how much of a performance boost are we looking at? Well, AMD claims 20 to 50 percent, which sounds very impressive indeed, if not a bit too optimistic – so we’re probably looking at something closer to 20 percent. However, we’re still months away from seeing Mantle in action, which leads us to AMD’s next problem: adoption. Mantle support is coming to BF4 next month and it should give us an idea of what to expect. DICE says Mantle will be supported by no fewer than 15 Frostbite-based games. Apart from DICE, AMD says it has a few other developers lined up.
The trouble is multi-vendor support. Oddly enough, Frostbite head honcho Johan Andersson said Mantle is actually not tied to AMD’s GCN architecture and that it’s forward compatible. This obviously means AMD’s post-GCN GPUs will support it, but it also means Nvidia could embrace it as well, as DICE claims “most Mantle functionality can be supported on today’s modern GPUs”. Unless DICE thinks Nvidia doesn’t make modern GPUs, this more or less means Nvidia GPUs could support Mantle sometime in the future.
Cash-strapped chipmaker AMD has had to apply for a short-term loan to help the company slow its financial decline. The outfit has raised a half-billion dollar line of credit from a group of lenders, with Bank of America acting as agent.
All this is happening as AMD struggles with the economic downturn and a decline in PC sales. The outfit should be good for the cash. After all, it is focusing on game consoles and associated royalties, and at its fiscal third-quarter earnings the business unit handling them posted revenue up 110 percent on the previous quarter and 96 percent year-over-year.
AMD said the proceeds of the five-year secured revolving line of credit, which matures in November 2018, may be used for general corporate purposes, such as working capital needs.
AMD has confirmed what we knew all along. Although it might announce the first Kaveri products later this year, the first desktop parts will be available on January 14, 2014. Although many were hoping to see the first Kaveri chips by the end of the year, having them just two weeks into 2014 doesn’t really make much of a difference.
So what can we expect from the first batch of Kaveri parts?
One part revealed during the APU 13 presentation was the A10-7850K. It appears to be a 3.7GHz quad-core with 512 Radeon cores (an R7-series GPU). The theoretical performance calculated by AMD for this particular part is 856 GFLOPS.
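That 856 GFLOPS figure lines up with the usual peak-throughput arithmetic, assuming the widely rumoured 720MHz GPU clock (which AMD has not confirmed here) and eight CPU FLOPS per core per cycle:

$$ \underbrace{512 \times 2 \times 0.72\,\text{GHz}}_{\text{GPU: } \approx 737\ \text{GFLOPS}} + \underbrace{4 \times 8 \times 3.7\,\text{GHz}}_{\text{CPU: } \approx 118\ \text{GFLOPS}} \approx 856\ \text{GFLOPS} $$

As always, that is a theoretical ceiling, not a promise of real-world performance.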
However, the trouble with Kaveri is that we still don’t know the impact of HUMA, HSA and Mantle on actual real world performance. HUMA will let the chip share memory between the GPU and CPU, although GDDR5 support is lacking, shattering the wet dreams of many a fanboy. HSA and Mantle could unlock even more performance.
“Kaveri can perform well above its class because of these technologies,” an AMD spokesman told EE Times.
So far AMD is confirming Mantle support in four upcoming games. Mantle could practically allow AMD APUs to do more with less silicon, boosting their price/performance ratio. Of course, more developers need to embrace Mantle in order to give new AMD APUs a competitive edge.
Ben Widawsky of Intel’s Open-Source Technology Center published the initial kernel driver support for Broadwell.
Intel is pushing the preliminary hardware support into Linux 3.13 and hopes to stabilise it and push additional features for Linux 3.14. The big idea is that Broadwell support in Linux 3.13 should be as good as Haswell’s. This means that Fedora 21, Ubuntu 14.04 LTS and other H1 2014 Linux distributions should be able to support the next-generation Intel hardware.
Intel is doing better than AMD at getting its chips into the open-source arena; for AMD’s major new GPU introductions, stable open-source support has only arrived post-launch. Intel has done better than Nvidia too, where open-source support is still largely left up to the reverse-engineering Nouveau community.
Over the weekend, Intel sent 62 patches to the Linux kernel enabling Broadwell support in its Direct Rendering Manager (DRM) driver. Intel is expected to release the libdrm and intel-gpu-tools support in the coming days. Stage two will involve the i965 Mesa DRI driver changes plus the xf86-video-intel DDX driver for X.Org support.
At the moment Samsung is building 3GB LPDDR3 modules and they are starting to show up in some of its products. However, Samsung’s 3GB modules use six 20nm-class 4Gb chips, so 3GB modules based on SK Hynix 6Gb chips could get there with just four chips, ending up somewhat smaller and cheaper.
The new SK Hynix chip is rated at 1,866 Mbps and it can handle a maximum of 7.4GB/s in single channel mode. In dual channel mode the chip can hit 14.8GB/s. The chips are already sampling to potential customers.
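The bandwidth figures follow straight from the per-pin rate, assuming the usual 32-bit LPDDR3 channel:

$$ 1866\ \text{Mbps} \times 32\ \text{pins} \div 8\ \text{bits per byte} \approx 7.46\ \text{GB/s} $$

per channel, which doubles to roughly 14.9GB/s across two channels, within rounding of the quoted 7.4GB/s and 14.8GB/s.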
Samsung is also working on 6Gb LPDDR3 chips that should end up in its 3GB modules and its high-end mobile products.
Earlier this year Samsung launched the first commercially available big.LITTLE ARM SoC and unleashed its marketing machine, hyping the benefits of ARM’s big.LITTLE configuration and octa-cores in general. Qualcomm wouldn’t stand for it. Qualcomm’s outspoken CMO Anand Chandrasekher took a few less than diplomatic jabs at Samsung, saying octa-cores are “dumb” and that Qualcomm has no intention of building them, since its engineers “don’t do dumb things”.
From a purely technical perspective Chandrasekher was right and most technologists agreed with him. Qualcomm and Apple have made it abundantly clear that optimized custom cores can easily beat the brute force approach of smaller chip designers who use off-the-shelf ARM tech. However, there is a very good reason some companies are still betting on the “more is better” approach and it all comes down to money rather than silicon.
Designing custom cores is simply not an option for smaller outfits. They lack the resources and know-how to pull it off, so custom cores are reserved for big players, at the top of ARM’s licensing pyramid. On the face of it, this doesn’t leave smaller chipmakers that many options – they have to use what they’ve got, and they’ve got IP for reference ARM cores which they can’t play around with. However, operating under such constraints is forcing them to look for alternative ways of coming up with competitive designs.
One of the ways of getting around these limitations was demonstrated by MediaTek, in the form of its new Cortex A7-based octa-core, the MT6592. Although it’s an octa-core, it is practically a mid-range chip. In some benchmarks, such as AnTuTu, it comes close to Qualcomm’s Snapdragon 600, but MediaTek’s chip has a much less potent GPU, and since most apps can’t put the additional cores to good use, its real-world performance won’t be as good as the benchmarks would have us believe.
So why does it make perfect sense then? Well, ARM has a peculiar and complicated IP licensing model. In case you’re interested in the finer points of ARM’s business model and how it could apply to cheap octa-cores, you can check out this extensive report on Anandtech.
When is a blink not a natural blink? For Google the question has such ramifications that it has devoted a supercomputer to solving the puzzle.
Slashgear reports that the internet giant is using its $10 million quantum computer to find out how products like Google Glass can differentiate between a natural blink and a deliberate blink used to trigger functionality.
The supercomputer based at Google’s Quantum Artificial Intelligence Lab is a joint venture with NASA and is being used to refine the algorithms used for new forms of control such as blinking. The supercomputer uses D-Wave chips kept at as near to absolute zero as possible, which makes it somewhat impractical for everyday wear but amazingly fast at solving brainteasers.
A Redditor reported earlier this year that Google Glass is capable of taking pictures by responding to blinking; however, the feature was disabled in the software code, as the technology had not advanced enough to differentiate between a natural impulse and an intentional request.
It is easy to see the potential of blink control. Imagine being able to capture your life as you live it, exactly the way you see it, without anyone ever having to stop and ask people to say “cheese”.
Google Glass is due for commercial release next year but for the many beta testers and developers who already have one this research could lead to an even richer seam of touchless functionality.
If nothing else you can almost guarantee that Q will have one ready for Daniel Craig’s next James Bond outing.
Last week Nvidia responded to AMD’s Hawaii launch by offering discounts on a few of its mid- to low-end cards. It is also said to be working on a couple of revamped GTX 750 and GTX 760 products designed to go toe to toe with AMD’s R7 series and R9 270X products. Following the first round of cuts, Nvidia clearly stated that it had no intention of extending the price cuts to other cards, namely the more powerful GTX 700 series products.
However, Digitimes reports that Nvidia could be forced into another round of price cuts as early as November. Of course, Digitimes’ sources have a habit of stating the obvious, as the rumoured price cut would follow R9 290 and R9 290X availability by a couple of weeks.
It would also make the GTX 760, 770 and possibly 780 a tad more competitive going into the holiday season. Needless to say, Nvidia can’t make its move until AMD announces official R9 290 and R9 290X pricing. This should happen quite soon and multiple sources are already telling us that the R9 290X will end up with a $699 price tag.
There is a caveat, though: the price is still not official and AMD could change it at the last possible moment, further limiting Nvidia’s ability to plan a possible price cut. Furthermore, the R9 290 price remains a mystery, and if it’s priced right it could spell more trouble for Nvidia than the R9 290X.
There are still plenty of unknowns and although Nvidia isn’t talking about any GTX 700 price cuts just yet, it is evident that it will have to do something over the next couple of months.
Red Hat has commissioned some researchers to blow its trumpet for it, revealing that the Total Cost of Ownership (TCO) of an IT infrastructure based on Red Hat Enterprise Linux is cheaper than one based on Microsoft Windows Server.
The figures weren’t just minor improvements: the report claimed 34 percent TCO savings, “superior operational efficiencies”, the ability to “support more users” and that “the superior scale and density of the Red Hat Enterprise Linux platforms translated directly into lower overall infrastructure costs”.
While Red Hat wouldn’t have released the report if it wasn’t good news for it, we’re interested to know what firm carried out the research, as having read the white paper we only found reference to “a premier global market intelligence firm”. Why didn’t it want to be named?
The study, which is based on a range of setups in different comparable industries and locales around the world, shows significant cost savings. The topline figures show a 29 percent saving in infrastructure costs, a 41 percent saving in IT staffing costs, and a staggering 54 percent improvement in productivity.
Although we know that this is a glorified sales brochure, these figures make for interesting reading at a time when budgets are being squeezed everywhere. Red Hat believes that this level of TCO savings will enable IT managers to innovate and move forward, rather than simply “keep the lights on”.
The next generation of Radeons is about to launch, but so far AMD has done a rather good job at keeping the details away from prying eyes. We got some info on the new branding scheme, some vague performance claims and that’s it – very little in the way of hard tech facts.
Now AMD is shedding more light on its new GPUs. In an interview with Forbes, VP and General Manager of AMD’s Graphics Business Unit, Matt Skynner, said the chips are coming in Q4, which we already knew, but he also confirmed what we reported weeks ago. The cards should end up a bit cheaper than many people had expected.
“We’re not targeting a $999 single GPU solution like our competition because we believe not a lot of people have that $999,” he said. “We normally address what we call the ultra-enthusiast segment with a dual-GPU offering like the 7990. So this next-generation line is targeting more of the enthusiast market versus the ultra-enthusiast one.”
Basically this means AMD is taking a more frugal approach, as it will not focus on the ultra-high-end market. Bang for buck, that’s what AMD is going for.
“It’s also extremely efficient. [Nvidia's Kepler] GK110 is nearly 30% bigger from a die size point of view. We believe we have the best performance for the die size for the enthusiast GPU,” he added.
This is very encouraging news for end users. The Hawaii die should end up 10 to 15 percent bigger than Tahiti, yet AMD reckons it can take on much bigger GK110 products. A roughly 30 percent smaller die means higher margins, better yields and more room to come up with competitive prices. In addition, it should result in a significant improvement in performance per watt, which means most users won’t have to upgrade their PSUs to get a significant performance boost, especially those upgrading from 40nm products.
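The numbers are easy to sanity-check, assuming Tahiti’s roughly 365mm² die and GK110’s roughly 551mm², both widely reported figures rather than anything AMD said here:

$$ 365\,\text{mm}^2 \times 1.10 \approx 402\,\text{mm}^2, \qquad 365\,\text{mm}^2 \times 1.15 \approx 420\,\text{mm}^2 $$

which would put Hawaii somewhere around 400 to 420mm² and make the 551mm² GK110 some 30 to 37 percent bigger, consistent with Skynner’s “nearly 30%” claim.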
Kaveri is coming in a few months, but before it ships AMD will apparently spice up the Richland line-up with a few low-power parts.
CPU World has come across an interesting listing, which points to two new 45W chips, the A8-6500T and the A10-6700T. Both are quads with 4MB of cache. The A8-6500T is clocked at 2.1GHz and can hit 3.1GHz on Turbo, while the A10-6700T’s base clock is 2.5GHz and it maxes out at 3.5GHz.
The prices are $108 and $155 for the A8 and A10 respectively, which doesn’t sound too bad although they are still significantly pricier than regular FM2 parts.
Intel has written a check for the Spanish artificial intelligence technology startup Indisys.
The outfit focuses on natural language recognition and the deal is worth $26 million. It follows Intel’s recent acquisition of Omek, an Israeli startup specialising in gesture-based interfaces. Indisys employees have already joined Intel. Apparently the deal was signed on May 31 and has now been completed.
Intel would not confirm how it will use the tech: “Indisys has a deep background in computational linguistics, artificial intelligence, cognitive science, and machine learning. We are not disclosing any details about how Intel might use the Indisys technologies at this time.”