ARM Appears To Be On The Rise

April 23, 2015 by Michael  
Filed under Computing

Chip designer ARM reported a 36 per cent rise in first-quarter net profit amid strong demand for its technology.

The British company said it expects 2015 revenue to meet Wall Street’s expectations.

ARM recorded net profit of $126.7 million for the three months to March 31 and revenue rose 22 percent.

Shares in ARM, which makes money by licensing its designs to chip makers, then collecting royalty revenue when the chips ship, were up by more than 5 per cent on the back of the news.

Processor-royalty revenue in dollar terms, a much-watched figure, rose 31 per cent on the year, the company said, adding that it has signed 30 processor licenses for a broad range of applications.

ARM CEO Simon Segars said: “As the world becomes more digital and more connected, we continue to see an increase in the demand for ARM’s smart and energy-efficient technology, which is driving both our licensing and royalty revenues.”

Processor-licensing revenue was down 2 per cent in the quarter, which was in line with expectations following strong growth previously. Chief Financial Officer Tim Score told journalists he expects it to grow in future quarters.

Aside from smartphones and tablets, ARM said it is also seeing demand for its processors to be used for servers and networking and for the “Internet of Things”, a term used for the growing tendency for more items to be wirelessly connected.

ARM expects to benefit from the growth of the Internet of Things in areas such as health and in cars, Score said.

Courtesy-Fud

 

ARM Buys Into Bluetooth For IoT

April 21, 2015 by Michael  
Filed under Computing

ARM has announced the acquisition of two Bluetooth companies in a bid to expand its presence in the Internet of Things (IoT) arena, and has created a new portfolio dubbed ARM Cordio in the process.

The UK semiconductor designer has picked up Wicentric, a Bluetooth smart stack and profile provider, and Sunrise Micro Devices (SMD), a provider of sub-one volt Bluetooth radio intellectual property (IP).

Wicentric is a privately held company that focuses on the development of low-power wireless products. These include a Bluetooth protocol stack and profiles for creating interoperable smart products, and a link layer for silicon integration.

SMD is also privately held and provides radio IP solutions including a pre-qualified, self-contained radio block and related firmware to simplify radio deployment.

“Central to all SMD radios is native sub-one volt operation,” explained ARM in justifying its acquirement. “Operating below one volt enables the radio to run much longer on batteries or harvested energy.”

Terms of the agreements have not been disclosed, but ARM said that both companies’ IP will be combined to form the ARM Cordio portfolio.

This will integrate with the firm’s existing processor and physical IP targeting markets that require low-power wireless communications in the IoT space. The portfolio is available now for immediate licensing.

ARM is strengthening its position in the IoT market in a bid to capitalise on what is essentially the next big thing in tech before it becomes ubiquitous.

For instance, ARM joined forces with IBM in February to launch its mbed Device Platform as a starter kit with cloud support, offering developer tools with cloud-based analytics.

The mbed tool was announced last year and is primarily an operating system built around open standards to “bring internet protocols, security and standards-based manageability into one integrated tool”, making IoT deployment faster and easier and thus speeding up the creation of IoT-powered devices.

Launching the mbed IoT Starter Kit Ethernet Edition with IBM means that the company can channel data from internet-connected devices directly into IBM’s Bluemix cloud platform.

The IoT Starter Kit consists of an ARM mbed-enabled development board from Freescale, powered by an ARM Cortex-M4-based processor, together with a sensor IO application shield.
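
To give a feel for the kind of data flow the starter kit enables, here is a minimal, self-contained sketch of a device loop that samples a sensor and forwards JSON readings towards a cloud endpoint. The endpoint, topic, sensor values and the publish() helper are all placeholders invented for illustration; the real kit would use the mbed libraries and IBM’s Bluemix transport rather than this stub.

```python
# Illustrative sketch only: a device loop that samples a sensor and forwards
# readings to a cloud endpoint. Endpoint, topic and sensor values are invented.
import json
import random
import time

CLOUD_ENDPOINT = "https://example-cloud-endpoint.invalid"  # placeholder, not a real URL

def read_temperature_c():
    """Stand-in for reading the application shield's temperature sensor."""
    return round(20.0 + random.uniform(-1.5, 1.5), 2)

def publish(endpoint, topic, payload):
    """Stand-in for whatever transport (MQTT/HTTP) the real kit would use."""
    print(f"-> {endpoint} [{topic}] {json.dumps(payload)}")

if __name__ == "__main__":
    for _ in range(3):  # a few sample readings
        publish(CLOUD_ENDPOINT, "iot/starter-kit/temperature",
                {"ts": time.time(), "temp_c": read_temperature_c()})
        time.sleep(1)
```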

Courtesy-TheInq

MediaTek Developing Two SoCs For Tablets

April 17, 2015 by Michael  
Filed under Computing

MediaTek is working on two new tablet SoCs and one of them is rumored to be a $5 design.

The MT8735 looks like a tablet version of MediaTek’s smartphone SoCs based on ARM’s Cortex-A53 core. The chip can also handle LTE (FDD and TDD), along with 3G and dual-band WiFi. This means it should end up in affordable data-enabled tablets. There’s no word on the clocks or GPU.

The MT8163 is supposed to be the company’s entry-level tablet part. Priced at around $5, the chip does not appear to feature a modem – it only has WiFi and Bluetooth on board. GPS is still there, but that’s about it.

Once again, details are sketchy so we don’t know much about performance. However, this is an entry-level part, so we don’t expect miracles. It will have to slug it out with Allwinner’s $5 tablet SoC, which was announced a couple of months ago.

According to a slide published by Mobile Dad, the MT8735 will be available later this month, but we have no timeframe for the MT8163.

Courtesy-Fud

 

Will Moore’s Law Become More Important In The Next Twenty Years?

April 15, 2015 by Michael  
Filed under Computing

Moore’s Law will be more relevant in the 20 years to come than it was in the past 50 as the Internet of Things (IoT) creeps into our lives, Intel has predicted.

The chip maker is marking the upcoming 50th anniversary of Moore’s Law on 19 April by asserting that the best is yet to come, and that the law will become more relevant in the next two decades as everyday objects become smaller, smarter and connected.

Moore’s Law has long been touted as responsible for most of the advances in the digital age, from personal computers to supercomputers, despite Intel admitting in the past that it wasn’t enough.

Named after Gordon Moore, co-founder of Intel and Fairchild Semiconductor, Moore’s Law is the observation that the number of transistors in a dense integrated circuit will double approximately every two years.

Moore wrote a paper in 1965 describing a doubling every year in the number of components per integrated circuit. He revised the forecast in 1975, doubling the time to two years, and his prediction has proved accurate.
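
As a rough illustration of that observation, the doubling can be written as simple compound growth. The 1971 Intel 4004 baseline of roughly 2,300 transistors below is an assumption added for this example, not a figure from the article.

```python
# Illustrative sketch of Moore's Law as compound doubling.
# The 1971 Intel 4004 baseline (~2,300 transistors) is an assumption added
# for this example, not a figure taken from the article.

def projected_transistors(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count assuming a doubling every `doubling_period` years."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1975, 1985, 1995, 2005, 2015):
        print(f"{year}: ~{projected_transistors(2_300, 1971, year):,.0f} transistors")
```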

The law now is used in the semiconductor industry to guide long-term planning and to set targets for research and development.

Many digital electronic devices and manufacturing developments are strongly linked to Moore’s Law, whether it’s microprocessor prices, memory capacity or sensors, all improving at roughly the same rate.

More recently, Intel announced the development of 3D NAND memory, which the company said was guided by Moore’s Law.

Intel senior fellow Mark Bohr said on a recent press call that, while Moore’s Law has been going strong for 50 years, he doesn’t see it slowing down, adding that Moore himself didn’t realise it would hold true for 50 years. Rivals such as AMD have also had their doubts.

“[Moore] thought it would push electronics into new spaces but didn’t realise how profound this would be, for example, the coming of the internet,” said Bohr.

“If you’re 20-something [the law] might seem somewhat remote and irrelevant to you, but it will be more relevant in the next 20 years than it was in the past 50, and may even dwarf this importance.

“We can see about 10 years ahead, so our research group has identified some promising options [for 7nm and 5nm] not yet fully developed, but we think we can continue Moore’s Law for at least another 10 years.”

Intel believes that upcoming tech will be so commonplace that it won’t even be a ‘thing’ anymore. It will “disappear” into all the places we inhabit and into clothing, into ingestible devices that improve our health, for example, and “it will just become part of our surroundings” without us even noticing it.

“We are moving to the last squares in the chess board, shrinking tech and making it more power efficient meaning it can go into everything around us,” said Bohr.

The Intel fellow describes the law as a positive move forward, but he also believes that we need to think hard about how we deploy it once products become smart, as they can become targets for digital attacks.

“Once you put intelligence in every object round you, the digital becomes physical. [For example] if your toaster becomes connected and gets a virus it’s an issue, but not so important as if your car does,” he said.

“We have to think how we secure these endpoints and make sure security and privacy are considered upfront and built into everything we deploy.”

Bohr explained that continuing Moore’s Law isn’t just a matter of making chips smaller, as the technology industry has to continually innovate device structures to ensure that it continues.

“Moore’s Law is exponential and you haven’t seen anything yet. The best is yet to come. I’m glad to hand off to the next generation entering the workforce; to create new exciting experiences, products and services to affect the lives of billions of people on the planet,” added Bohr.

“Moore’s Law is the North Star guiding Intel. It is the driving force for the people working at Intel to continue the path of Gordon’s vision, and will help enable emerging generations of inventors, entrepreneurs and leaders to re-imagine the future.”

Courtesy-TheInq

GlobalFoundries Sheds Light On 14nm Game Plan

April 10, 2015 by Michael  
Filed under Computing

GlobalFoundries has clarified details of the ramp-up of 14nm chip production at its Fab 8 manufacturing facility in New York.

It has apparently taped out multiple 14nm designs and is tuning its equipment using a lead product at the moment. The company is on track to start high-volume shipments of 14nm chips this year.

Jason Gorss, a spokesman for GlobalFoundries, said that the outfit’s 14nm FinFET technology is maturing and on schedule at its Fab 8 facility in Malta, New York.

“The early version (14LPE) is qualified in our fab and our lead product is yielding in double digits. Since 2014, we have taped multiple products and testchips and are seeing rapid progress, in yield and maturity, for volume shipments in 2015.”

The comment follows a statement from Mubadala Development last week which claimed that GlobalFoundries had begun ramping up manufacturing of 14nm chips for customers.

Mubadala, which owns GlobalFoundries, did not provide any details. Even though production is currently not in high volume, it is clear that GlobalFoundries ships certain chips to clients.

It is not clear which 14nm chips GlobalFoundries produces at present, but it is highly likely that the company makes Samsung Exynos 7420 application processors for its process tech partner.

Another early partner of GlobalFoundries with its 14nm FinFET production could be Apple, which is expected to use Samsung’s 14nm process tech to make its upcoming A9 system-on-chip.

Global Foundries licensed Samsung’s 14LPE (low-power early) and 14LPP (low-power plus) process technologies last year.

These processes use FinFET transistors and rely on the back-end-of-line (BEOL) interconnects of 20nm manufacturing technology. The 14nm FinFET transistors allow a performance boost of 20 per cent at the same power, or cut power consumption by 35 per cent without decreasing performance or complexity.
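
Read literally, those figures describe two points on a power/performance trade-off curve. Here is a minimal sketch of that reading, using only the percentages quoted above; the baseline values of 1.0 are arbitrary normalisations, not real measurements.

```python
# Sketch of the quoted 14nm FinFET trade-off relative to a 20nm baseline.
# The 20% and 35% figures come from the paragraph above; the baseline
# performance/power values of 1.0 are arbitrary normalisations.

baseline_perf, baseline_power = 1.0, 1.0   # normalised 20nm reference

# Option 1: same power budget, 20 per cent more performance.
iso_power_perf = baseline_perf * 1.20

# Option 2: same performance, 35 per cent less power.
iso_perf_power = baseline_power * (1 - 0.35)

print(f"At the same power: {iso_power_perf:.2f}x performance")
print(f"At the same performance: {iso_perf_power:.2f}x power (i.e. 35% lower)")
```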

Courtesy-Fud

Was Crytek Saved By Amazon?

April 9, 2015 by Michael  
Filed under Gaming

The deal that helped Crytek recover from its recent financial difficulties was with Amazon, according to a report from Kotaku.

The online retail giant signed a licensing deal for CryEngine, Crytek’s proprietary game engine. Sources within the company put the deal’s value at between $50 million and $70 million, and suggested that Amazon may be using it as the bedrock for a proprietary engine of its own.

However Amazon uses the technology, the importance of the deal for Crytek cannot be overstated. Last year, during the summer, it became apparent that all was not well at the German developer. Employees hadn’t been fully paid in months, leading to an alleged staff walkout in its UK office, where a sequel to Homefront was in development. Koch Media acquired the Homefront IP and its team shortly after.

When the company’s management eventually addressed the rumors, it had already secured the financing necessary to take the company forward. No details of the deal were offered, but it’s very likely that Crytek got the money it needed from Amazon.

We have contacted Crytek to confirm the details, but it certainly fits with the perception that Amazon could emerge as a major creator of game content. It has snapped up some elite talent to do just that, it acquired Twitch for a huge sum of money, and it has been very open about where it plans to fit into the overall market.

Courtesy-GI.biz

 

Is MediaTek Working On A Secret GPU?

April 8, 2015 by Michael  
Filed under Computing

An upcoming MediaTek SoC has been spotted in GFXbench and this tablet-oriented chip has created a lot of speculation thanks to the choice of GPU.

The Cortex-A53 based MediaTek MT8163 was apparently tested on a dev board with 2GB of RAM and the benchmark failed to identify the GPU. GFXbench identified the GPU as a part coming from “MediaTek Inc. Sapphire-lit”.

Spinning up the rumour mill

This is where the speculation starts, as many punters associated the GPU with AMD, and the presence of the word “Sapphire” also prompted some to conclude that AMD’s leading GPU add-in-board partner had something to do with it.

The Sapphire word association looks like nothing more than clutching at straws, because it’s highly unlikely that an AIB would have much to do with the process of licensing AMD IP for mobile graphics.

However, this does not necessarily rule out AMD’s involvement. The fact that MediaTek’s name is on it is perhaps more important, because it suggests an in-house design. Whether or not the part is indeed an in-house design, and whether it features some AMD technology, is still up for debate.

Why would MediaTek need AMD to begin with?

MediaTek relies on ARM Mali GPUs, although it uses Imagination GPUs on some designs. So where does AMD fit into all this?

As we reported last month, the companies have been cooperating on the SoC graphics front for a while, but they are tight lipped about the scope of their cooperation.

MediaTek is a supporter of HSA and a founding member of the HSA Foundation, but this doesn’t prove much, either, since the list of founding members includes ARM, Imagination, Texas Instruments, Samsung and Qualcomm.

Using AMD technology on SoCs would have to be a long-term strategy, built around the concept of using AMD IP to boost overall SoC performance rather than just GPU performance. This is why we do not expect to see the fruits of their cooperation in commercial products anytime soon.

Improved compute performance is one of the reasons MediaTek may be inclined to use AMD technology, but another angle is that “Graphics by AMD” or “Radeon Graphics” would sound good from a marketing perspective and allow MediaTek to differentiate its products in a saturated market.

Courtesy-Fud

Did AMD Commit Fraud?

April 6, 2015 by Michael  
Filed under Computing

AMD must face claims that it committed securities fraud by hiding problems with the bungled 2011 launch of Llano that eventually led to a $100 million write-down, a US court has decided.

According to Techeye, US District Judge Yvonne Gonzalez Rogers said plaintiffs had a case that AMD officials misled them with statements made in the spring of 2011, and the company will have to face a full trial.

The lawsuit was over the Llano chip, which AMD had claimed was “the most impressive processor in history.”

AMD originally said that the product launch would happen in the fourth quarter of 2010, but sales of the Llano were delayed because of problems at the company’s chip manufacturing plant.

The then Chief Financial Officer Thomas Seifert told analysts on an April 2011 conference call that problems with chip production for the Llano were in the past, and that the company would have ample product for a launch in the second quarter.

Press officers for AMD continued to insist that there were no problems with supply, concealing the fact that it was only shipping Llanos to top-tier computer manufacturers because it did not have enough chips.

By the time AMD ramped up Llano shipments in late 2011, no one wanted them any more, leading to an inventory glut.

AMD disclosed in October 2012 that it was writing down $100 million of Llano inventory as not shiftable.

Shares fell nearly 74 percent from a peak of $8.35 in March 2012 to a low of $2.18 in October 2012 when the market learned the extent of the problems with the Llano launch.

Courtesy-Fud

AMD Working On Asynchronous Shaders

April 2, 2015 by Michael  
Filed under Computing

AMD has been working closely with Microsoft on the upcoming DirectX 12 to create something it calls “Asynchronous Shaders”, a more efficient way of handling task queues.

In DirectX 11, synchronous task scheduling is handled through multi-threaded graphics with pre-emption and prioritization.

A GPU’s shaders do the drawing of the image, the computing of the game physics, post-processing and more, and they do this by being assigned various tasks. These are handled through the command stream, which is generated by merging individual command queues.

However, the merged queue has gaps because the individual queues are not generated in order in multi-threaded graphics. This means the shaders don’t reach their full potential and spend time sitting idle.

DirectX 12 enters

In DirectX 12, Asynchronous Shaders provide asynchronous multi-threaded graphics with pre-emption and prioritization. The Asynchronous Compute Engines (ACE) on AMD’s GCN-based GPUs will interleave the tasks, filling the gaps in one queue with tasks from another.

At the same time, the GPU can still move the main command queue aside to let priority tasks pass when necessary. It probably goes without saying that this leads to a performance gain.

On AMD’s GCN GPUs, each ACE can handle up to eight queues, with each one looking after its own shaders. Basic GPUs have just two ACEs, while more elaborate GPUs carry eight.
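
To make the scheduling idea concrete, here is a small toy simulation of the interleaving described above: gaps in a main command queue are filled with work from a secondary compute queue, which is the behaviour attributed to the ACEs. The queue contents and gap positions are invented for illustration and are not based on any real workload.

```python
# Toy illustration of asynchronous queue interleaving, loosely modelled on the
# ACE behaviour described above. Queue contents and idle slots are invented.

from collections import deque

def interleave(main_queue, compute_queue):
    """Fill idle slots (None) in the main queue with tasks from the compute queue."""
    compute = deque(compute_queue)
    schedule = []
    for slot in main_queue:
        if slot is None and compute:
            schedule.append(compute.popleft())   # reuse an otherwise wasted cycle
        elif slot is not None:
            schedule.append(slot)
        else:
            schedule.append("idle")              # nothing left to fill the gap
    # Any leftover compute work runs after the main queue drains.
    schedule.extend(compute)
    return schedule

main = ["draw_0", None, "draw_1", None, "post_fx", None]
compute = ["physics_0", "physics_1"]
print(interleave(main, compute))
# ['draw_0', 'physics_0', 'draw_1', 'physics_1', 'post_fx', 'idle']
```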

AMD demonstrations show that running post-processing through Asynchronous Shaders costs very little in frame rate while improving overall performance.

All the increased parallelism will ensure that more frames make their way to the screen even faster, which can be especially interesting for purposes such as VR.

Courtesy-Fud

 

Toshiba And SanDisk To Launch 48-Layer 3D Flash Chip

March 31, 2015 by Michael  
Filed under Computing

Toshiba has announced the world’s first 48-layer Bit Cost Scalable (BiCS) flash memory chip.

The BiCS is a two-bit-per-cell, 128Gb (16GB) device with a 3D-stacked cell structure that improves density and significantly reduces the overall size of the chip.
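
The capacity figures above can be sanity-checked with simple arithmetic: a 128Gb die at two bits per cell implies 64 billion cells, and 128Gb works out to 16GB. A small sketch of that arithmetic (decimal units are an assumption for the example):

```python
# Sanity check of the quoted BiCS figures: 128Gb die, two bits per cell.
GIGABIT = 1_000_000_000  # decimal gigabit, assumed for this illustration

die_bits = 128 * GIGABIT
bits_per_cell = 2

cells_per_die = die_bits // bits_per_cell
die_bytes = die_bits // 8

print(f"Cells per die: {cells_per_die:,}")            # 64,000,000,000
print(f"Die capacity: {die_bytes / GIGABIT:.0f} GB")  # 16 GB
```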

Toshiba is already using 15nm dies so, despite the layering, the finished product will be competitively thin.

24 hours after the first announcement, SanDisk made one of its own. The two companies share a fabrication plant and usually make such announcements in close succession.

“We are very pleased to announce our second-generation 3D NAND, which is a 48-layer architecture developed with our partner Toshiba,” said Dr Siva Sivaram, executive vice president of memory technology at SanDisk.

“We used our first generation 3D NAND technology as a learning vehicle, enabling us to develop our commercial second-generation 3D NAND, which we believe will deliver compelling storage solutions for our customers.”

Samsung has been working on its own 3D stacked memory for some time and has released a number of iterations. Production began last May, following a 10-year research cycle.

Moving away from the more traditional design process, the BiCS uses a ‘charge trap’ which stops electrons leaking between layers, improving the reliability of the product.

The chips are aimed primarily at the solid state drive market, as the 48-layer stacking process is said to enhance reliability, write speed and read/write endurance. However, the BiCS is said to be adaptable to a number of other uses.

All storage manufacturers are facing a move to 3D because, unless you want your flash drives very long and flat, real estate on chips is getting more expensive per square inch than a bedsit in Soho.

Micron has been talking in terms of 3D NAND since an interview with The INQUIRER in 2013 and, after signing a deal with Intel, has predicted 10TB in a 2mm chip by the end of this year.

Production of the chips will roll out initially from Fab 5 before moving in early 2016 to Fab 2 at the firm’s Yokkaichi Operations plant.

This is in stark contrast to Intel, which mothballed its Fab 42 chip fabrication plant in Chandler, Arizona before it even opened, as the semiconductors for computers it was due to produce have fallen in demand by such a degree.

The Toshiba and SanDisk BiCS chips are available for sampling from today.

Courtesy-TheInq

 

Will Intel Challenge nVidia In The GPU Space?

March 30, 2015 by Michael  
Filed under Computing

Intel has released details of its next-generation Xeon Phi processor, and it is starting to look like Intel is gunning for a chunk of Nvidia’s GPU market.

According to a briefing from Avinash Sodani, Knights Landing Chief Architect at Intel, a product update by Hugo Saleh, Marketing Director of Intel’s Technical Computing Group, an interactive technical Q&A and a lab demo of a Knights Landing system running on an Intel reference-design system, Nvidia could be Intel’s target.

Knights Landing is leagues apart from prior Phi products and more flexible for a wider range of uses. Unlike more specialized processors, Intel describes Knights Landing as taking a “holistic approach” to new breakthrough applications.

Unlike the current-generation Phi design, which operates as a coprocessor, Knights Landing incorporates x86 cores and can directly boot and run standard operating systems and application code without recompilation.

The test system, which had a socketed CPU and memory modules, was running a stock Linux distribution. Modified Atom Silvermont x86 cores form a Knights Landing ‘tile’, the chip’s basic design unit, consisting of dual x86 cores and vector execution units alongside cache memory and intra-tile mesh communication circuitry.

Each multi-chip package includes a processor with 30 or more tiles and eight high-speed memory chips.

Intel said the on-package memory, totaling 16GB, is made by Micron with custom I/O circuitry and might be a variant of Micron’s announced, but not yet shipping, Hybrid Memory Cube.
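
Taken at face value, the figures above imply at least 60 x86 cores per package (30-plus tiles with two cores each) and 2GB per on-package memory chip (16GB across eight chips). A minimal sketch of that back-of-the-envelope arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope figures implied by the Knights Landing description above.
tiles_per_package = 30      # "30 or more tiles" -- lower bound from the article
x86_cores_per_tile = 2      # "dual x86 ... execution units"
on_package_memory_gb = 16   # total on-package memory quoted above
memory_chips = 8            # "eight high-speed memory chips"

print(f"x86 cores per package (minimum): {tiles_per_package * x86_cores_per_tile}")
print(f"Memory per on-package chip: {on_package_memory_gb / memory_chips:.0f} GB")
```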

The high-speed memory is similar to the GDDR5 devices used on GPUs like Nvidia’s Tesla.

It looks like Intel saw that Nvidia was making great leaps into the high performance arena with its GPU and thought “I’ll be having some of that.”

The internals of a GPU and Xeon Phi are different, but share common ideas.

Nvidia has a big head start. It has already announced the price and availability of a Titan X development box designed for researchers exploring GPU applications to deep learning. Intel has not done that yet for Knights Landing systems.

But Phi is also a hybrid that includes dozens of full-fledged 64-bit x86 cores. This could make it better at some parallelizable application categories that use vector calculations.

Courtesy-Fud

AMD Shows Plans For ARM Servers

March 27, 2015 by Michael  
Filed under Computing

Buried in AMD’s shareholders’ report was some surprising detail about the outfit’s first 64-bit ARM server SoCs.

For those who came in late, they are supposed to be going on sale in the first half of 2015.

We know that the ARM Cortex-A57-based SoC has been codenamed ‘Hierofalcon’.

AMD started sampling these Embedded R-series chips last year and is aiming to release the chipset in the first half of this year for embedded data center applications, communications infrastructure, and industrial solutions.

But it looks like the Hierofalcon SoC will include eight Cortex-A57 cores with 4MB L2 cache and will be manufactured on a 28nm process. It will support two 64-bit DDR3/4 memory channels with ECC up to 1866MHz and up to 128GB per CPU. Connectivity options will include two 10GbE KR, 8x SATA 3 6Gb/s, 8 lanes PCIe Gen 3, SPI, UART, and I2C interfaces. The chip will have a TDP between 15 to 30W.
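
From the memory figures quoted above, a rough peak-bandwidth estimate can be derived: two 64-bit channels at 1866MT/s work out to just under 30GB/s. A small sketch of that calculation, illustrative only, since sustained bandwidth depends on the memory controller and ECC overhead:

```python
# Rough peak DRAM bandwidth implied by the Hierofalcon memory spec above.
channels = 2
bus_width_bits = 64
transfer_rate_mt_s = 1866           # DDR3/4-1866, i.e. mega-transfers per second

bytes_per_transfer = bus_width_bits // 8
peak_gb_s = channels * bytes_per_transfer * transfer_rate_mt_s / 1000

print(f"Theoretical peak bandwidth: ~{peak_gb_s:.1f} GB/s")  # ~29.9 GB/s
```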

The highly integrated SoC includes 10Gb KR Ethernet and PCI-Express Gen 3 for high-speed network connectivity, making it ideal for control-plane applications. The chip also features a dedicated security processor that enables ARM TrustZone technology for enhanced security, plus a dedicated cryptographic co-processor on board, addressing the increased need for networked, secure systems.

Soon after Hierofalcon is out, AMD will be launching the SkyBridge platform that will feature interchangeable 64-bit ARM and x86 processors. Later in 2016, the company will be launching the K12 chip, its custom high performance 64-bit ARM core.

Courtesy-Fud

Can MediaTek Bring The Cortex-A72 To Market In The Fall?

March 23, 2015 by Michael  
Filed under Computing

MediaTek became the first chipmaker to publicly demo a SoC based on ARM’s latest Cortex-A72 CPU core, but the company’s upcoming chip still relies on the old 28nm manufacturing process.

We had a chance to see the upcoming MT8173 in action at the Mobile World Congress a couple of weeks ago.

The next step is to bring the new Cortex-A72 core to a new node and into mobiles. This is what MediaTek is planning to do by the end of the year.

Cortex-A72 smartphone parts coming in Q4

It should be noted that MediaTek’s 8000-series parts are designed for tablets, and the MT8173 is no exception. However, the new core will make its way to smartphone SoCs later this year, as part of the MT679x series.

According to Digitimes Research, MediaTek’s upcoming MT679x chips will utilize a combination of Cortex-A53 and Cortex-A72 cores. It is unclear whether MediaTek will use the planar 20nm node or 16nm FinFET for the new part.

By the looks of it, this chip will replace 32-bit MT6595, which is MediaTek’s most successful high performance part yet, with a few relatively big design wins, including Alcatel, Meizu, Lenovo and Zopo. The new chip will also supplement, and possibly replace the recently introduced MT6795, a 64-bit Cortex-A53/Cortex-A72 part used in the HTC Desire 826.

More questions than answers

Digitimes also claims the MT679x Cortex-A72 parts may be the first MediaTek products to benefit from AMD technology, but details are scarce. We can’t say whether or not the part will use AMD GPU technology, or some HSA voodoo magic. Earlier this month we learned that MediaTek is working with AMD and the latest report appears to confirm our scoop.

The other big question is the node. The chip should launch toward the end of the year, so we probably won’t see any devices prior to Q1 2016. While 28nm is still alive and kicking, by 2016 it will be off the table, at least in this market segment. Previous MediaTek roadmap leaks suggested that the company would transition to 20nm on select parts by the end of the year.

However, we are not entirely sure 20nm will cut it for high-end parts in 2016. Huawei has already moved to 16nm with its latest Kirin 930 SoC, Samsung stunned the world with the 14nm Exynos 7420, and Qualcomm’s upcoming Snapdragon 820 will be a FinFET part as well.

It is obvious that TSMC’s and Samsung’s 20nm nodes will not be used on most, if not all, high-end SoCs next year. With that in mind, it would be logical to expect MediaTek to use a FinFET node as well. On the other hand, depending on the cost, 20nm could still make sense for MediaTek – provided it ends up significantly cheaper than FinFET. While a 20nm chip wouldn’t deliver the same level of power efficiency and performance, with the right price it could find its way to more affordable mid-range devices, or flagships designed by smaller, value-oriented brands (especially those focusing on Chinese and Indian markets).

Courtesy-Fud

Will TSMC Win Apple’s A9 Business?

March 18, 2015 by Michael  
Filed under Computing

TSMC is reportedly getting the majority of Apple A9 orders, which would be a big coup for the company.

An Asian brokerage firm released a research note, claiming that disputes over the number of Apple A9 orders from TSMC and Samsung are “coming to an end.”

The unnamed brokerage firm said TSMC will gain more orders due to its superior yield-ramp and “manufacturing excellence in mass-production.”

This is not all, as the firm also claims TSMC managed to land orders for all Apple A9X chipsets, which will power next generation iPads. With the A9X, TSMC is expected to supply about 70 percent of all Apple A9-series chips, reports Focus Taiwan.

While Samsung managed to beat other mobile chipmakers (and TSMC), and roll out the first SoC manufactured on a FinFET node, TSMC is still in the game. The company is already churning out 16nm Kirin 930 processors for Huawei, and it’s about to get a sizable chunk of Apple’s business.

TSMC should have no trouble securing more customers for its 16FF process, which will be supplemented by the superior 16FF+ process soon. In addition, TSMC is almost certain to get a lot of business from Nvidia and AMD once their FinFET GPUs are ready.

Courtesy-Fud

MediaTek To Go With AMD GPUs

March 11, 2015 by Michael  
Filed under Computing

One of the hottest things we learned at the Mobile World Congress is that MediaTek is working with AMD on mobile SoC graphics.

This is a big deal for both companies, as it means that AMD is getting back into the ultra-low power graphics market, while MediaTek might finally get faster graphics and gain more appeal in the high-end segment. The choice of ARM Mali or Imagination Technologies GPUs is available to anyone, but as most of you know Qualcomm has its own in-house Adreno graphics, while Nvidia uses ultra-low power Maxwell GPUs for its latest SoCs.

Since Nvidia exited the mobile phone business, it is now a two horse race between the ever dominant Qualcomm and fast growing MediaTek. The fact that MediaTek will get AMD graphics just adds fuel to the fire.

We have heard that key AMD graphics people are in continuous contact with MediaTek and that they have been working on an SoC graphics solution for a while.

MediaTek can definitely benefit from faster graphics, as the recently pictured tablet SoC MT8173, powered by two Cortex-A72 cores clocked at up to 2.4GHz and two Cortex-A53 cores, has PowerVR GX6250 graphics (two clusters). The most popular tablet chip, Apple’s A8X, has a PowerVR Series 6XT GXA6850 (octa-core), which should end up significantly faster, but at the same time significantly more expensive.

The MediaTek MT6795 is a 28nm eight-core part with a 2.2GHz clock and a PowerVR G6200 GPU at 700MHz, which is 100MHz faster than the one we tested in the Meizu MX4, one of the fastest SoCs until Qualcomm’s Snapdragon 810 came out in late February.

AMD and MediaTek declined to comment on this upcoming partnership, but our industry sources say that both have been working on new graphics for future chips that will be announced at a later date. It’s cool to see that AMD will return to this market, especially as the company sold off its Imageon graphics to Qualcomm back in 2009 for a lousy $65 million. Imageon, by ATI, was the foundation for Adreno graphics.

Some 18 months ago we were reassured by senior AMD graphics people that “AMD didn’t forget how to make good ultra-low power graphics”, and we guess that this cooperation proves it.

Courtesy-Fud