Chip designer ARM reported a 36 per cent rise in first-quarter net profit amid strong demand for its technology.
The British company said that it expects 2015 revenue to meet Wall Street expectations.
ARM recorded net profit of $126.7 million for the three months to March 31, while revenue rose 22 per cent.
Shares in ARM, which makes money by licensing its designs to chip makers, then collecting royalty revenue when the chips ship, were up by more than 5 per cent on the back of the news.
Processor-royalty revenue in dollar terms, a much-watched figure, rose 31 per cent on the year, the company said, adding that it has signed 30 processor licenses for a broad range of applications.
ARM CEO Simon Segars said: “As the world becomes more digital and more connected, we continue to see an increase in the demand for ARM’s smart and energy-efficient technology, which is driving both our licensing and royalty revenues.”
Processor-licensing revenue was down 2 per cent in the quarter, which was in line with expectations following strong growth previously. Chief Financial Officer Tim Score told journalists he expects it to grow in future quarters.
Aside from smartphones and tablets, ARM said it is also seeing demand for its processors to be used for servers and networking and for the “Internet of Things”, a term used for the growing tendency for more items to be wirelessly connected.
ARM expects to benefit from the growth of the Internet of Things in areas such as health and in cars, Score said.
We recently showed you a new next-generation processor with 16 Zen cores, Greenland integrated graphics and DDR4 support.
This part definitely sounds interesting, but we got an update on the 2016 Opteron server market parts. The next generation Opteron won’t have integrated graphics, but it will have up to 32 Zen x86 cores with 64-thread support. Unlike the highest end compute HSA part that comes with Greenland HBM graphics, the next generation Opteron needs all the silicon space for its L2 and L3 caches as well as its Zen x86 cores.
Just like the 16 Zen core high performance market APU, each core has 512KB of L2 cache and each cluster of four cores shares 8MB of L3 cache. The highest end part will come with eight clusters of four cores, and if you do the math this server-oriented CPU will come with 16MB of L2 cache and 64MB of L3 cache for its CPU cores.
A few other notable features of the next generation server parts include a new platform security processor that enables secure boot and acts as a crypto coprocessor. The next generation Opteron has eight DDR4 memory channels, each capable of handling 256GB. The chipset supports PCIe Gen 3, SATA, four 10GbE Ethernet ports and a server controller hub. Of course, there will be an SMP, dual-socket version.
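The cache and memory arithmetic above can be checked in a few lines. The layout (eight four-core clusters, 512KB of L2 per core, 8MB of L3 per cluster, eight 256GB DDR4 channels) is as rumoured, not confirmed silicon:

```python
# Back-of-the-envelope arithmetic for the rumoured 32-core Zen Opteron.
# All figures come from the report above; this is a sketch of the rumour,
# not a confirmed specification.

cores = 32
l2_per_core_kb = 512          # 512KB of L2 per core
cores_per_cluster = 4
l3_per_cluster_mb = 8         # each four-core cluster shares 8MB of L3

clusters = cores // cores_per_cluster          # 8 clusters
total_l2_mb = cores * l2_per_core_kb // 1024   # 16MB of L2 in total
total_l3_mb = clusters * l3_per_cluster_mb     # 64MB of L3 in total

ddr4_channels = 8
gb_per_channel = 256
max_memory_tb = ddr4_channels * gb_per_channel / 1024  # 2TB maximum

print(clusters, total_l2_mb, total_l3_mb, max_memory_tb)
# → 8 16 64 2.0
```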
The next generation Opteron will have 32 CPU cores in its highest end iteration, and we expect some Stock Keeping Units (SKUs) with fewer cores than that for inexpensive solutions.
In case AMD comes to market with this part on schedule, and if the Zen core ends up performing as expected, Intel might finally get some competition. Let’s just hope for AMD’s sake that this server CPU is coming in 2016, sooner rather than later.
We can only speculate on possible Zen-based FX parts for high-end desktops, or on the manufacturing process for Zen chips, but at this point we cannot confirm that FX parts are coming, or whether they will be manufactured on 14nm.
MediaTek is working on two new tablet SoCs and one of them is rumored to be a $5 design.
The MT8735 looks like a tablet version of Mediatek’s smartphone SoCs based on ARM’s Cortex-A53 core. The chip can also handle LTE (FDD and TDD), along with 3G and dual-band WiFi. This means it should end up in affordable data-enabled tablets. There’s no word on the clocks or GPU.
The MT8163 is supposed to be the company’s entry-level tablet part. Priced at around $5, the chip does not appear to feature a modem – it only has WiFi and Bluetooth on board. GPS is still there, but that’s about it.
Once again, details are sketchy so we don’t know much about performance. However, this is an entry-level part, so we don’t expect miracles. It will have to slug it out with Allwinner’s $5 tablet SoC, which was announced a couple of months ago.
According to a slide published by Mobile Dad, the MT8735 will be available later this month, but we have no timeframe for the MT8163.
But there’s nothing to see here as far as Torvalds is concerned. It’s just another day in the office. And all this in “Back To The Future II” year, as well.
Meanwhile, under the bonnet, the community is already slaving away on Linux 4.1, which is expected to be a far more extensive release, with 100 code changes committed within hours of Torvalds’ announcement of 4.0.
But there is already some discord in the ranks, with concerns that some of the changes to 4.1 will be damaging to the x86 compatibility of the kernel. But let’s let them sort that out amongst themselves.
After all, an anti-troll dispute resolution code was recently added to the Linux kernel in an effort to stop some of the more outspoken trolling that takes place, not least from Torvalds himself, according to some members of the community.
Moore’s Law will be more relevant in the 20 years to come than it was in the past 50 as the Internet of Things (IoT) creeps into our lives, Intel has predicted.
The chip maker is marking the upcoming 50th anniversary of Moore’s Law on 19 April by asserting that the best is yet to come, and that the law will become more relevant in the next two decades as everyday objects become smaller, smarter and connected.
Moore’s Law has long been touted as responsible for most of the advances in the digital age, from personal computers to supercomputers, despite Intel admitting in the past that it wasn’t enough.
Named after Gordon Moore, co-founder of Intel and Fairchild Semiconductor, Moore’s Law is the observation that the number of transistors in a dense integrated circuit will double approximately every two years.
Moore wrote a paper in 1965 describing a doubling every year in the number of components per integrated circuit. He revised the forecast in 1975, doubling the time to two years, and his prediction has proved accurate.
The law now is used in the semiconductor industry to guide long-term planning and to set targets for research and development.
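As a rough illustration of what doubling every two years compounds to, the projection below counts the roughly 25 doublings in the law's first 50 years (the starting count of 1 is purely illustrative):

```python
# A quick illustration of Moore's observation: transistor counts doubling
# roughly every two years. The starting figure is arbitrary; only the
# growth factor matters here.

def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward under Moore's Law."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# 25 doublings over the law's first 50 years (1965-2015)
growth = transistors(1, 1965, 2015) / transistors(1, 1965, 1965)
print(f"{growth:,.0f}x")  # → 33,554,432x
```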
Many digital electronic devices and manufacturing developments are strongly linked to Moore’s Law, whether it’s microprocessor prices, memory capacity or sensors, all improving at roughly the same rate.
More recently, Intel announced the development of 3D NAND memory, which the company said was guided by Moore’s Law.
Intel senior fellow Mark Bohr said on a recent press call that, while Moore’s Law has been going strong for 50 years, he doesn’t see it slowing down, adding that Moore himself didn’t realise it would hold true for 50 years. Rivals such as AMD have also had their doubts.
“[Moore] thought it would push electronics into new spaces but didn’t realise how profound this would be, for example, the coming of the internet,” said Bohr.
“If you’re 20-something [the law] might seem somewhat remote and irrelevant to you, but it will be more relevant in the next 20 years than it was in the past 50, and may even dwarf this importance.
“We can see about 10 years ahead, so our research group has identified some promising options [for 7nm and 5nm] not yet fully developed, but we think we can continue Moore’s Law for at least another 10 years.”
Intel believes that upcoming tech will be so commonplace that it won’t even be a ‘thing’ anymore. It will “disappear” into all the places we inhabit and into clothing, into ingestible devices that improve our health, for example, and “it will just become part of our surroundings” without us even noticing it.
“We are moving to the last squares in the chess board, shrinking tech and making it more power efficient meaning it can go into everything around us,” said Bohr.
The Intel fellow describes the law as a positive move forward, but he also believes that we need to have a hard think about where we want to place intelligence once products become smart, as they can become targets for digital attacks.
“Once you put intelligence in every object round you, the digital becomes physical. [For example] if your toaster becomes connected and gets a virus it’s an issue, but not so important as if your car does,” he said.
“We have to think how we secure these endpoints and make sure security and privacy are considered upfront and built into everything we deploy.”
Bohr explained that continuing Moore’s Law isn’t just a matter of making chips smaller; the technology industry has to continually innovate device structures to keep it going.
“Moore’s Law is exponential and you haven’t seen anything yet. The best is yet to come. I’m glad to hand off to the next generation entering the workforce; to create new exciting experiences, products and services to affect the lives of billions of people on the planet,” added Bohr.
“Moore’s Law is the North Star guiding Intel. It is the driving force for the people working at Intel to continue the path of Gordon’s vision, and will help enable emerging generations of inventors, entrepreneurs and leaders to re-imagine the future.”
Intel has been publishing more information about its Knights Landing Xeon Phi (co)processors.
Intel has given WCCF Tech an Intel-produced PDF which was released to provide supplementary information for the 2015 Intel Developer Forum (IDF).
The document outlines some spectacularly beefy processors Intel is going to produce as part of its professional Xeon Phi range.
The document, which is short on car chases and scantily clad women, tells the story of a 72 Silvermont core Intel Xeon Phi coprocessor.
The coprocessor supports six channels of DDR4-2400 totalling up to 384GB, and can have up to 16GB of HBM on board. It supports 36 PCIe Gen 3 lanes. Intel’s testing puts the Knights Landing processors and coprocessors at up to three times faster in single-threaded performance and up to three times more power efficient.
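For context, the quoted six channels of DDR4-2400 work out to the following theoretical peaks, assuming standard 64-bit DDR4 channels; sustained bandwidth will be lower:

```python
# Peak-bandwidth arithmetic for the quoted Knights Landing memory setup.
# Theoretical DDR4 figures only: 64-bit channels at 2400 MT/s.

channels = 6
transfers_per_sec = 2400e6     # DDR4-2400
bytes_per_transfer = 8         # 64-bit channel width

per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9            # 19.2 GB/s
total_gbs = transfers_per_sec * bytes_per_transfer * channels / 1e9       # 115.2 GB/s

capacity_per_channel_gb = 384 // channels                                 # 64GB per channel

print(per_channel_gbs, total_gbs, capacity_per_channel_gb)
# → 19.2 115.2 64
```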
Knights Landing chips are supposed to be the future of Intel’s enterprise architecture for high performance parallel computing. Much of its success will depend on properly written software.
Intel thinks that its Xeon Phi coprocessors can compete against the GPU-based parallel processing solutions from the likes of Nvidia and AMD.
A rumor fresh out of Korea suggests Nvidia might be tapping Samsung as a GPU foundry, but there is a catch.
The news comes from Korea Times, which quoted a source familiar with the matter. The source told the paper that the deal involved Nvidia GPUs, but it was a small contract.
GPUs on 14nm? Something doesn’t add up
If you are sceptical, don’t worry – so are we. While Nvidia is expected to use Samsung for its upcoming Tegra SoCs, this is the first time we have heard it could also try using Samsung’s and Globalfoundries’ FinFET nodes for GPUs.
This would obviously place Nvidia in an awkward situation, as it would basically be using an AMD spinoff to build its chips.
There is another problem with the report. The source claims the deal is valued at “a few million dollars”, which would be barely enough to cover the cost of a single tape-out. In fact, it might not be enough at all. The cost of taping out FinFET chips is relatively high, as these are cutting edge nodes.
Tegras or GPUs?
We doubt Nvidia will ditch TSMC for Samsung, at least as far as GPUs are concerned.
The most logical explanation would be that Nvidia has inked a deal with Samsung to tape out Tegra chips rather than GPUs. The source may have simply mixed them up, which would explain everything.
Still, there is always a chance Nvidia is looking at alternative nodes for its GPUs, but we just don’t see it happening, at least not yet.
Intel unveiled the Atom x3 chip ahead of this year’s Mobile World Congress, and revealed a version of the processor designed specifically for IoT devices at its Developer Forum event in Shenzhen, China this week, alongside a smartphone version that will start shipping later this year.
The Atom x3 IoT processor comes with 3G and LTE connectivity, and an extended temperature range for extreme weather conditions, making it suitable for devices such as outdoor weather sensors.
Intel’s Atom x3 IoT chip will be made available to developers in the second half of the year, suggesting that devices are not likely to arrive until 2016, and will arrive with support for Android and Linux.
Intel CEO Brian Krzanich said: “Intel remains focused on delivering leadership products and technologies in traditional areas of computing, while also investing in new areas and entrepreneurs – students, makers and developers – to find and fuel future generations of innovation with China.”
That isn’t all Intel has planned for IoT, as the firm recently announced plans to bring payment services to connected devices.
The firm has partnered with Ingenico to include mobile payment capabilities in a wide array of connected devices for the IoT, including intelligent vending machines, kiosks and digital signs.
Intel is clearly going big on the IoT, but a roundtable The INQUIRER held with the firm last year highlighted the complications that businesses could face when entering the market.
Martin King, head of IT services at Ealing, Hammersmith and West London College, said that, first of all, the perception that the IoT is all ‘hype’ needs to be overcome.
“I imagine that, while it could be hype, it’s up to the industry to make it happen,” he said.
“There’s a massive market opportunity there, and I believe the industry will be keen to make it happen and we probably won’t really notice it until it’s actually happening.”
Dr Will Venters, an assistant professor in information systems at the London School of Economics, argued that security concerns will be the IoT’s biggest problem.
“The security argument is always put forward, but there’s a value argument that goes alongside that: maybe you want data in your sensors, but you don’t want the risk of the data on the sensor.”
It looks like Intel has managed to squeeze Iris graphics inside 15W Skylake processors.
This is a huge achievement, considering that only 28W TDP Broadwell mobile processors currently support Iris 6100 graphics. The fastest of them is the Intel Core i7-5557U, a dual-core with four threads clocked at a 3.1GHz base frequency with a maximum turbo of 3.4GHz, making it the fastest Intel notebook processor to support Iris graphics. It is a 28W TDP part with a configurable TDP down to 23W.
Iris 6100 graphics are used in three additional SKUs: the Core i5-5287U, Core i5-5257U and Core i3-5157U. They are all dual-core processors with four threads and a 28W TDP.
We don’t know much about upcoming Skylake parts, so we don’t know the SKUs, but Intel has communicated to its partners that Iris graphics on 15W products will be available starting with Skylake.
At the highest end, the Skylake-U based replacement for the Core i7-5650U will come with Iris graphics. The dual-core Core i7-5650U works at 2.2GHz and can hit 3.1GHz on turbo, all while staying in the 15W envelope.
The Core i7-5650U comes with Intel HD Graphics 6000, while the next generation Core i7 and Core i5 processors at 15W will have Iris-grade graphics. Intel will launch four Iris-capable 15W Skylake processors in Q4 2015, and will follow up with an additional four in Q1 2016.
According to Intel’s official data at Anandtech, the Core i7-5557U with 48 EUs at 1100MHz can score 844.8 GFLOPS of 32-bit FP, 211.2 GFLOPS of 64-bit DP or 422.4 GOPS of 32-bit INT. This is not a bad score, but considering that even the GeForce GTX 850M with its 1,198.1 GFLOPS outperforms Intel’s Iris 6100, you cannot really expect Iris Pro in 15W notebook processors to replace discrete mobile GPUs anytime soon.
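Intel's quoted numbers line up if we assume each Gen8 EU retires 16 FP32 operations per clock (with FP64 at a quarter of that rate and 32-bit integer at half). This per-EU breakdown is our reading of the published figures, not something Intel states:

```python
# Reconstructing Intel's quoted throughput figures for Iris 6100.
# Assumption: 16 FP32 ops per EU per clock (two SIMD-4 FPUs with FMA
# counted as two operations), FP64 at a quarter rate, INT32 at half rate.

eus = 48
clock_ghz = 1.1   # 1100 MHz

fp32_gflops = eus * 16 * clock_ghz   # ≈ 844.8
fp64_gflops = eus * 4 * clock_ghz    # ≈ 211.2
int32_gops = eus * 8 * clock_ghz     # ≈ 422.4

print(round(fp32_gflops, 1), round(fp64_gflops, 1), round(int32_gops, 1))
```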
GlobalFoundries has apparently taped out multiple 14nm designs and is tweaking its equipment using a lead product at the moment. The company is on track to start high-volume shipments of 14nm chips this year.
Jason Gorss, a spokesman for GlobalFoundries, said that the outfit’s 14nm FinFET technology is “maturing and on schedule at our Fab 8 facility in Malta, New York”.
“The early version (14LPE) is qualified in our fab and our lead product is yielding in double digits. Since 2014, we have taped multiple products and testchips and are seeing rapid progress, in yield and maturity, for volume shipments in 2015.”
The comment follows a statement from Mubadala Development last week which claimed that GlobalFoundries had begun ramping manufacturing of 14nm chips for customers.
Mubadala, which owns GlobalFoundries, did not provide any details. Even though production is currently not in high volume, it is clear that GlobalFoundries ships certain chips to clients.
It is not clear which 14nm chips GlobalFoundries produces at present, but it is highly likely that the company makes Samsung Exynos 7420 application processors for its process tech partner.
Another early partner of GlobalFoundries with its 14nm FinFET production could be Apple, which is expected to use Samsung’s 14nm process tech to make its upcoming A9 system-on-chip.
GlobalFoundries licensed Samsung’s 14LPE (low-power early) and 14LPP (low-power plus) process technologies last year.
The process uses FinFET transistors and relies on back-end-of-line (BEOL) interconnects from 20nm manufacturing technology. The 14nm FinFET transistors allow chips to boost performance by 20 per cent at the same power, or to cut power consumption by 35 per cent without decreasing performance or complexity.
An upcoming MediaTek SoC has been spotted in GFXbench and this tablet-oriented chip has created a lot of speculation thanks to the choice of GPU.
The Cortex-A53 based MediaTek MT8163 was apparently tested on a dev board with 2GB of RAM and the benchmark failed to identify the GPU. GFXbench identified the GPU as a part coming from “MediaTek Inc. Sapphire-lit”.
Spinning up the rumour mill
This is where the speculation starts, as many punters associated the GPU with AMD, and the presence of the word “Sapphire” also prompted some to conclude that AMD’s leading GPU add-in-board partner had something to do with it.
The Sapphire word association doesn’t look like anything other than clutching at straws, because it’s highly unlikely that an AIB would have much to do with the process of licensing AMD IP for mobile graphics.
However, this does not necessarily mean the GPU has nothing to do with AMD. The fact that MediaTek’s name is on it is perhaps more important, because it suggests an in-house design. Whether the part is indeed an in-house design, and whether it features some AMD technology, is still up for debate.
Why would MediaTek need AMD to begin with?
MediaTek relies on ARM Mali GPUs, although it uses Imagination GPUs on some designs. So where does AMD fit into all this?
As we reported last month, the companies have been cooperating on the SoC graphics front for a while, but they are tight lipped about the scope of their cooperation.
MediaTek is a supporter of HSA and a founding member of the HSA Foundation, but this doesn’t prove much, either, since the list of founding members includes ARM, Imagination, Texas Instruments, Samsung and Qualcomm.
Using AMD technology on SoCs would have to be a long-term strategy, built around the concept of using AMD IP to boost overall SoC performance rather than just GPU performance. This is why we do not expect to see the fruits of their cooperation in commercial products anytime soon.
Improved compute performance is one of the reasons MediaTek may be inclined to use AMD technology, but another angle is that “Graphics by AMD” or “Radeon Graphics” would sound good from a marketing perspective and allow MediaTek to differentiate its products in a saturated market.
AMD must face claims that it committed securities fraud by hiding problems with the bungled 2011 launch of Llano that eventually led to a $100 million write-down, a US court has decided.
According to Techeye, US District Judge Yvonne Gonzales Rogers said the plaintiffs had a case that AMD officials misled them with statements made in the spring of 2011, and the company will have to face a full trial.
The lawsuit was over the Llano chip, which AMD had claimed was “the most impressive processor in history.”
AMD originally said that the product launch would happen in the fourth quarter of 2010, but sales of the Llano were delayed because of problems at the company’s chip manufacturing plant.
The then Chief Financial Officer Thomas Seifert told analysts on an April 2011 conference call that problems with chip production for the Llano were in the past, and that the company would have ample product for a launch in the second quarter.
Press officers for AMD continued to insist that there were no problems with supply, concealing the fact that it was only shipping Llanos to top-tier computer manufacturers because it did not have enough chips.
By the time AMD ramped up Llano shipments in late 2011, no one wanted them any more, leading to an inventory glut.
AMD disclosed in October 2012 that it was writing down $100 million of Llano inventory as not shiftable.
Shares fell nearly 74 percent from a peak of $8.35 in March 2012 to a low of $2.18 in October 2012 when the market learned the extent of the problems with the Llano launch.
Intel announced that it is now shipping the Bay Trail system on a chip (SoC) successor codenamed Braswell to OEM partners.
Announced almost exactly a year ago at Intel’s Developer Forum in Beijing, Braswell is a more powerful version of Bay Trail running on the 14nm fab process, designed to power low-cost devices like Chromebooks and budget PCs.
The chip maker said that devices will hit the market sometime in late summer or autumn.
“We expect Braswell-based systems to be available in the market for the back to school 2015 selling season,” an Intel representative told The INQUIRER. “Specific dates and options will be announced by our OEM partners.”
That’s all Intel will give us for now, but we were told that full details regarding the upcoming chip will be revealed at IDF in Shenzhen next week.
Braswell was expected to arrive at the end of 2014 when it was originally unveiled last year.
Kirk Skaugen, general manager of Intel’s PC Client group, said that it will replace Bay Trail as part of the Atom line, and will feature in over 20 Chromebook designs.
“Last year, we had only four designs on Chrome. Today I can announce that we will have over 20 designs on Chrome,” said Skaugen at the time.
Intel recently announced another 14nm chip, the Atom x range, previously codenamed Cherry Trail, although this will be focused on tablets rather than the value PC market segment and Chromebooks like Braswell.
In terms of power, Braswell is likely to fit snugly above the Atom x5 and x7 Cherry Trail SoCs and beneath the firm’s recently announced 5th-generation Core products, previously codenamed Broadwell.
Unveiled at Mobile World Congress earlier this year, Intel’s Atom x5 and x7 chips, previously codenamed Cherry Trail, are also updates to the previous Bay Trail Atom line-up, being the first Intel Atom SoCs on 14nm.
These higher-powered SoCs are designed to bring improved 3D performance to mainstream premium handheld devices running full versions of Windows and Android, such as 7in to 10.1in tablets and 2-in-1 hybrid laptops priced at around $119 to $499.
For example, Microsoft quietly announced on Tuesday that the upcoming Surface 3 tablet-laptop hybrid will be powered by an Intel Atom x7. The device is priced at $500.
Intel has launched Intel N3000 series systems on a chip (SoCs), which will kill off Bay Trail-M and Bay Trail-D SoCs on the desktop and mobile PCs.
CPU World has also spotted some other newly revealed chips.
Intel has also launched desktop and mobile Core i3 and Pentium microprocessors. The new mobile models are the Pentium 3825U, Core i3-5015U and Core i3-5020U, all based on 14nm Broadwell.
The Core i3-5015U and i3-5020U are dual-cores with Hyper-Threading technology, HD 5500 graphics and an ultra-low 15 Watt TDP. The processors run at 2.1GHz and 2.2GHz respectively, 100MHz higher than the i3-5005U and i3-5010U models which were launched three months ago.
The i3-5015U and i3-5020U chips offer a 50 MHz higher graphics boost. Official prices of these SKUs are $275 and $281.
The Pentium 3825U incorporates a couple of enhancements over the older Pentium 3805U. It supports Hyper-Threading, which allows it to process twice as many threads, and its base and maximum graphics frequencies are increased to 300MHz and 850MHz.
Both the 3805U and 3825U operate at 1.9GHz and have 2MB of L2 cache. The 3825U is rated at a 15 Watt TDP and priced at $161.
AMD has been working closely with Microsoft on the upcoming DirectX 12 to create something which it calls “Asynchronous Shaders”, a more efficient way of handling task queues.
In DirectX 11, task scheduling is synchronous: multi-threaded graphics with pre-emption and prioritization.
A GPU’s shaders do the drawing of the image, the computing of the game physics, post-processing and more, by being assigned various tasks. These are handled through the command stream, which is generated by merging the individual command queues.
However, the queue has empty slots, because in multi-threaded graphics tasks are not generated in order. This means the shaders don’t reach their full potential and get stuck in dead-end jobs.
DirectX 12 enters
In DirectX 12, Asynchronous Shaders provide asynchronous multi-threaded graphics with pre-emption and prioritization. The Asynchronous Compute Engines (ACE) on AMD’s GCN-based GPUs will interleave the tasks, filling the gaps in one queue with tasks from another.
Despite that, it can still move the main command queue to the side to let priority tasks pass by when necessary. It probably goes without saying that this leads to a performance gain.
On AMD’s GCN GPUs, each ACE can handle up to eight queues, with each one looking after its own shaders. Basic GPUs will have just two ACEs, while more elaborate GPUs carry eight.
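The gap-filling idea can be sketched in a few lines of toy scheduler. The queue contents and task names below are invented for illustration; real ACEs do this in hardware, not in software:

```python
# A toy sketch of the gap-filling idea behind Asynchronous Shaders:
# idle slots (None) in the graphics command queue are filled with tasks
# borrowed from a separate compute queue.

def interleave(graphics_queue, compute_queue):
    """Fill idle slots (None) in the graphics queue with compute tasks."""
    compute = iter(compute_queue)
    merged = []
    for slot in graphics_queue:
        if slot is None:                        # an idle gap in the stream
            merged.append(next(compute, None))  # borrow a compute task
        else:
            merged.append(slot)
    return [task for task in merged if task is not None]

graphics = ["draw_0", None, "draw_1", None, "draw_2"]  # gaps between draws
compute = ["physics", "post_fx"]                       # async compute work
print(interleave(graphics, compute))
# → ['draw_0', 'physics', 'draw_1', 'post_fx', 'draw_2']
```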
AMD’s demonstrations show that running Asynchronous Shaders alongside post-processing costs little in frame rate while improving overall performance.
All the increased parallelism will ensure that more frames make their way to the screen faster, which can be especially interesting for purposes such as VR.
Intel has released details of its next-generation Xeon Phi processor, and it is starting to look like Intel is gunning for a chunk of Nvidia’s GPU market.
Judging by a briefing from Avinash Sodani, Knights Landing chief architect at Intel, a product update by Hugo Saleh, marketing director of Intel’s Technical Computing Group, an interactive technical Q&A and a lab demo of a Knights Landing system running on an Intel reference-design system, Nvidia could be Intel’s target.
Knights Landing is leagues apart from prior Phi products and more flexible for a wider range of uses. Unlike more specialized processors, Intel describes Knights Landing as taking a “holistic approach” to new breakthrough applications.
Unlike the current generation Phi design, which operates as a coprocessor, Knights Landing incorporates x86 cores and can directly boot and run standard operating systems and application code without recompilation.
The test system, which had a socketed CPU and memory modules, was running a stock Linux distribution. Modified Atom Silvermont x86 cores form a Knights Landing ’tile’, the chip’s basic design unit, consisting of dual x86 and vector execution units alongside cache memory and intra-tile mesh communication circuitry.
Each multi-chip package includes a processor with 30 or more tiles and eight high-speed memory chips.
Intel said the on-package memory, totalling 16GB, is made by Micron with custom I/O circuitry, and might be a variant of Micron’s announced, but not yet shipping, Hybrid Memory Cube.
The high-speed memory is similar to the GDDR5 devices used on GPUs like Nvidia’s Tesla.
It looks like Intel saw that Nvidia was making great leaps into the high performance arena with its GPU and thought “I’ll be having some of that.”
The internals of a GPU and Xeon Phi are different, but share common ideas.
Nvidia has a big head start. It has already announced the price and availability of a Titan X development box designed for researchers exploring GPU applications to deep learning. Intel has not done that yet for Knights Landing systems.
But Phi is also a hybrid that includes dozens of full-fledged 64-bit x86 cores. This could make it better at some parallelizable application categories that use vector calculations.