Toshiba has announced the world’s first 48-layer Bit Cost Scalable (BiCS) flash memory chip.
The BiCS is a two-bit-per-cell, 128Gb (16GB) device with a 3D-stacked cell structure that improves density and significantly reduces the overall size of the chip.
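As a quick sanity check on those figures, here is a back-of-the-envelope sketch; the arithmetic below is ours, derived from the announced specs rather than from Toshiba documentation.

```python
# Back-of-the-envelope arithmetic on the announced BiCS specs.

CAPACITY_GBITS = 128   # announced capacity in gigabits
BITS_PER_CELL = 2      # two-bit-per-cell (MLC) design
LAYERS = 48            # height of the 3D stack

capacity_gbytes = CAPACITY_GBITS / 8                    # 128Gb = 16GB
total_cells = CAPACITY_GBITS * 10**9 // BITS_PER_CELL   # 64 billion cells
cells_per_layer = total_cells // LAYERS                 # ~1.33 billion

print(capacity_gbytes, total_cells, cells_per_layer)
```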
Toshiba is already using 15nm dies so, despite the layering, the finished product will be competitively thin.
24 hours after the first announcement, SanDisk made an announcement of its own. The two companies share a fabrication plant and usually make such announcements in close succession.
“We are very pleased to announce our second-generation 3D NAND, which is a 48-layer architecture developed with our partner Toshiba,” said Dr Siva Sivaram, executive vice president of memory technology at SanDisk.
“We used our first generation 3D NAND technology as a learning vehicle, enabling us to develop our commercial second-generation 3D NAND, which we believe will deliver compelling storage solutions for our customers.”
Samsung has been working on its own 3D stacked memory for some time and has released a number of iterations. Production began last May, following a 10-year research cycle.
Moving away from the more traditional design process, the BiCS uses a ‘charge trap’ which stops electrons leaking between layers, improving the reliability of the product.
The chips are aimed primarily at the solid state drive market, as the 48-layer stacking process is said to enhance reliability, write speed and read/write endurance. However, the BiCS is said to be adaptable to a number of other uses.
All storage manufacturers are facing a move to 3D because, unless you want your flash drives very long and flat, real estate on chips is getting more expensive per square inch than a bedsit in Soho.
Micron has been talking in terms of 3D NAND since an interview with The INQUIRER in 2013 and, after signing a deal with Intel, has predicted 10TB in a 2mm chip by the end of this year.
Production of the chips will roll out initially from Fab 5 before moving in early 2016 to Fab 2 at the firm’s Yokkaichi Operations plant.
This is in stark contrast to Intel, which mothballed its Fab 42 chip fabrication plant in Chandler, Arizona before it even opened, because demand for the computer semiconductors it was due to produce has fallen so sharply.
The Toshiba and SanDisk BiCS chips are available for sampling from today.
Intel has released details of its next-generation Xeon Phi processor and it is starting to look like Intel is gunning for a chunk of Nvidia’s GPU market.
According to a briefing from Avinash Sodani, Knights Landing chief architect at Intel; a product update by Hugo Saleh, marketing director of Intel’s Technical Computing Group; an interactive technical Q&A; and a lab demo of a Knights Landing system running on an Intel reference-design system, Nvidia could well be Intel’s target.
Knights Landing is leagues apart from prior Phi products and more flexible for a wider range of uses. Unlike more specialized processors, Intel describes Knights Landing as taking a “holistic approach” to new breakthrough applications.
Unlike the current-generation Phi design, which operates as a coprocessor, Knights Landing incorporates x86 cores and can directly boot and run standard operating systems and application code without recompilation.
The test system had socketed CPU and memory modules and was running a stock Linux distribution. Modified Atom Silvermont x86 cores form the basis of a Knights Landing ’tile’, the chip’s basic design unit, which consists of dual x86 and vector execution units alongside cache memory and intra-tile mesh communication circuitry.
Each multi-chip package includes a processor with 30 or more tiles and eight high-speed memory chips.
Intel said the on-package memory, totaling 16GB, is made by Micron with custom I/O circuitry and might be a variant of Micron’s announced, but not yet shipping, Hybrid Memory Cube.
The high-speed memory is similar to the GDDR5 devices used on GPUs like Nvidia’s Tesla.
It looks like Intel saw that Nvidia was making great leaps into the high performance arena with its GPU and thought “I’ll be having some of that.”
The internals of a GPU and a Xeon Phi are different, but they share common ideas.
Nvidia has a big head start. It has already announced the price and availability of a Titan X development box designed for researchers exploring GPU applications to deep learning. Intel has not done that yet for Knights Landing systems.
But Phi is also a hybrid that includes dozens of full-fledged 64-bit x86 cores. This could make it better at some parallelizable application categories that use vector calculations.
Buried in AMD’s shareholders’ report was some surprising detail about the outfit’s first ARM 64-bit server SoCs.
For those who came in late, they are supposed to be going on sale in the first half of 2015.
We know that the ARM Cortex-A57 architecture based SoC has been codenamed ‘Hierofalcon.’
AMD started sampling these Embedded R-series chips last year and is aiming to release the chipset in the first half of this year for embedded data center applications, communications infrastructure, and industrial solutions.
But it looks like the Hierofalcon SoC will include eight Cortex-A57 cores with 4MB L2 cache and will be manufactured on a 28nm process. It will support two 64-bit DDR3/4 memory channels with ECC up to 1866MHz and up to 128GB per CPU. Connectivity options will include two 10GbE KR, 8x SATA 3 6Gb/s, 8 lanes PCIe Gen 3, SPI, UART, and I2C interfaces. The chip will have a TDP between 15 and 30W.
The highly integrated SoC includes 10Gb KR Ethernet and PCI-Express Gen 3 for high-speed network connectivity, making it ideal for control plane applications. The chip also features a dedicated security processor implementing ARM’s TrustZone technology for enhanced security, as well as a dedicated on-board cryptographic co-processor, aligning with the increased need for networked, secure systems.
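For a sense of scale, the quoted memory configuration implies roughly 30GB/s of peak bandwidth. The sketch below is our own estimate, assuming the usual DDR convention that “1866MHz” means 1866 mega-transfers per second; it is not an AMD figure.

```python
# Peak theoretical bandwidth for two 64-bit DDR3/4 channels at 1866MT/s.

CHANNELS = 2
BYTES_PER_TRANSFER = 64 // 8         # one 64-bit-wide channel moves 8 bytes
TRANSFERS_PER_SECOND = 1866 * 10**6  # assuming 1866MHz means 1866MT/s

peak_bytes_per_sec = CHANNELS * BYTES_PER_TRANSFER * TRANSFERS_PER_SECOND
peak_gb_per_sec = peak_bytes_per_sec / 10**9
print(round(peak_gb_per_sec, 1))     # roughly 29.9 GB/s
```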
Soon after Hierofalcon is out, AMD will be launching the SkyBridge platform that will feature interchangeable 64-bit ARM and x86 processors. Later in 2016, the company will be launching the K12 chip, its custom high performance 64-bit ARM core.
MediaTek became the first chipmaker to publicly demo a SoC based on ARM’s latest Cortex-A72 CPU core, but the company’s upcoming chip still relies on the old 28nm manufacturing process.
We had a chance to see the upcoming MT8173 in action at the Mobile World Congress a couple of weeks ago.
The next step is to bring the new Cortex-A72 core to a new node and into mobiles. This is what MediaTek is planning to do by the end of the year.
Cortex-A72 smartphone parts coming in Q4
It should be noted that MediaTek’s 8000-series parts are designed for tablets, and the MT8173 is no exception. However, the new core will make its way to smartphone SoCs later this year, as part of the MT679x series.
According to Digitimes Research, MediaTek’s upcoming MT679x chips will utilize a combination of Cortex-A53 and Cortex-A72 cores. It is unclear whether MediaTek will use the planar 20nm node or 16nm FinFET for the new part.
By the looks of it, this chip will replace the 32-bit MT6595, which is MediaTek’s most successful high-performance part yet, with a few relatively big design wins including Alcatel, Meizu, Lenovo and Zopo. The new chip will also supplement, and possibly replace, the recently introduced MT6795, a 64-bit Cortex-A53/Cortex-A57 part used in the HTC Desire 826.
More questions than answers
Digitimes also claims the MT679x Cortex-A72 parts may be the first MediaTek products to benefit from AMD technology, but details are scarce. We can’t say whether or not the part will use AMD GPU technology, or some HSA voodoo magic. Earlier this month we learned that MediaTek is working with AMD and the latest report appears to confirm our scoop.
The other big question is the node. The chip should launch toward the end of the year, so we probably won’t see any devices prior to Q1 2016. While 28nm is still alive and kicking, by 2016 it will be off the table, at least in this market segment. Previous MediaTek roadmap leaks suggested that the company would transition to 20nm on select parts by the end of the year.
However, we are not entirely sure 20nm will cut it for high-end parts in 2016. Huawei has already moved to 16nm with its latest Kirin 930 SoC, Samsung stunned the world with the 14nm Exynos 7420, and Qualcomm’s upcoming Snapdragon 820 will be a FinFET part as well.
It is obvious that TSMC’s and Samsung’s 20nm nodes will be skipped by most, if not all, high-end SoCs next year. With that in mind, it would be logical to expect MediaTek to use a FinFET node as well. On the other hand, depending on the cost, 20nm could still make sense for MediaTek – provided it ends up significantly cheaper than FinFET. While a 20nm chip wouldn’t deliver the same level of power efficiency and performance, with the right price it could find its way to more affordable mid-range devices, or flagships designed by smaller, value-oriented brands (especially those focusing on the Chinese and Indian markets).
TSMC is reportedly getting the majority of Apple A9 orders, which would be a big coup for the company.
An Asian brokerage firm released a research note, claiming that disputes over the number of Apple A9 orders from TSMC and Samsung are “coming to an end.”
The unnamed brokerage firm said TSMC will gain more orders due to its superior yield-ramp and “manufacturing excellence in mass-production.”
This is not all, as the firm also claims TSMC managed to land orders for all Apple A9X chipsets, which will power next generation iPads. With the A9X, TSMC is expected to supply about 70 percent of all Apple A9-series chips, reports Focus Taiwan.
While Samsung managed to beat other mobile chipmakers (and TSMC), and roll out the first SoC manufactured on a FinFET node, TSMC is still in the game. The company is already churning out 16nm Kirin 930 processors for Huawei, and it’s about to get a sizable chunk of Apple’s business.
TSMC should have no trouble securing more customers for its 16FF process, which will be supplemented by the superior 16FF+ process soon. In addition, TSMC is almost certain to get a lot of business from Nvidia and AMD once their FinFET GPUs are ready.
One of the hottest things we learned at the Mobile World Congress is that MediaTek is working with AMD on mobile SoC graphics.
This is a big deal for both companies: AMD is getting back into the ultra-low power graphics market, while MediaTek might finally get faster graphics and gain more appeal in the high-end segment. ARM Mali or Imagination Technologies GPUs are available to anyone, but as most of you know, Qualcomm has its own in-house Adreno graphics, while Nvidia uses ultra-low power Maxwell GPUs for its latest SoCs.
Since Nvidia exited the mobile phone business, it is now a two horse race between the ever dominant Qualcomm and fast growing MediaTek. The fact that MediaTek will get AMD graphics just adds fuel to the fire.
We have heard that key AMD graphics people are in continuous contact with MediaTek and that they have been working on an SoC graphics solution for a while.
MediaTek can definitely benefit from faster graphics. The recently pictured tablet SoC MT8173, powered by two Cortex-A72 cores clocked at up to 2.4GHz and two Cortex-A53 cores, has PowerVR GX6250 graphics (two clusters). The most popular tablet chip, Apple’s A8X, has PowerVR Series 6XT GXA6850 (octa-cluster) graphics, which should end up significantly faster, but at the same time significantly more expensive.
The MediaTek MT6795 is a 28nm eight-core part with a 2.2GHz clock and a PowerVR G6200 GPU at 700MHz, which is 100MHz faster than the one we tested in the Meizu MX4, one of the fastest SoCs until Qualcomm’s Snapdragon 810 came out in late February.
AMD and MediaTek declined to comment on this upcoming partnership, but our industry sources know that both have been working on new graphics for future chips that will be announced at a later date. It’s cool to see AMD return to this market, especially as the company sold off its Imageon graphics back in 2009 – for a lousy $65 million to Qualcomm. Imageon by ATI was the foundation for Adreno graphics.
We were reassured some 18 months ago by senior AMD graphics people that “AMD didn’t forget how to make good ultra-low power graphics”, and we guess this cooperation proves it.
AMD Liquid VR is not a retail product – it is an initiative to develop and deliver the best Virtual Reality (VR) experience in the industry.
AMD Liquid VR was announced at the Game Developers Conference in San Francisco, and the company describes it as a “set of innovative technologies focused on enabling exceptional VR content development” for hardware based on AMD silicon.
Developers will soon get access to the LiquidVR SDK, which will help them address numerous issues associated with VR development.
Platform and software rather than hardware
If you were expecting to see a sexy AMD VR headset with a killer spec, the announcement may be disappointing. However, if you are a “what’s under the bonnet” kind of geek, there are a few interesting highlights.
AMD has put a lot of effort into minimising motion-to-photon latency, which should not only help improve the experience, but also keep you from experiencing motion sickness, or hurling over that new carpet that really ties the room together.
Headline features of LiquidVR SDK 1.0 include:
Async Shaders for smooth head-tracking enabling Hardware-Accelerated Time Warp, a technology that uses updated information on a user’s head position after a frame has been rendered and then warps the image to reflect the new viewpoint just before sending it to a VR headset, effectively minimizing latency between when a user turns their head and what appears on screen.
Affinity Multi-GPU for scalable rendering, a technology that allows multiple GPUs to work together to improve frame rates in VR applications by allowing them to assign work to run on specific GPUs. Each GPU renders the viewpoint from one eye, and then composites the outputs into a single stereo 3D image. With this technology, multi-GPU configurations become ideal for high performance VR rendering, delivering high frame rates for a smoother experience.
Latest data latch for smooth head-tracking, a programming mechanism that helps get head tracking data from the head-mounted display to the GPU as quickly as possible by binding data as close to real-time as possible, practically eliminating any API overhead and removing latency.
Direct-to-display for intuitively attaching VR headsets, to deliver a seamless plug-and-play virtual reality experience from an AMD Radeon™ graphics card to a connected VR headset, while enabling features such as booting directly to the display or using extended display features within Windows.
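The per-eye rendering idea behind Affinity Multi-GPU can be illustrated with a toy sketch. To be clear, this is NOT the LiquidVR API: the `Gpu` class and `render()` call below are hypothetical stand-ins we invented to show the concept of pinning each eye’s workload to its own GPU before compositing.

```python
# Toy illustration of per-eye GPU affinity rendering (not real LiquidVR code).

class Gpu:
    def __init__(self, name):
        self.name = name

    def render(self, eye, head_pose):
        # A real GPU would rasterize the scene from this eye's viewpoint;
        # here we just return a label describing the work done.
        return f"{self.name}:{eye}@pose{head_pose}"

def render_stereo_frame(gpus, head_pose):
    # Affinity: each eye's workload is pinned to a specific GPU, so both
    # eyes can render in parallel instead of serially on one device.
    left = gpus[0].render("left", head_pose)
    right = gpus[1].render("right", head_pose)
    # Composite the two outputs into a single stereo image.
    return (left, right)

frame = render_stereo_frame([Gpu("gpu0"), Gpu("gpu1")], head_pose=42)
print(frame)  # ('gpu0:left@pose42', 'gpu1:right@pose42')
```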
You can grab the full AMD LiquidVR presentation here. (pdf)
What’s next for LiquidVR?
It all depends on what you were expecting, and what the rest of the industry does. AMD hopes LiquidVR will be compatible with a broad range of VR devices. LiquidVR will allow hardware makers to implement AMD technology in their products with relative ease, enabling 100Hz refresh rates, the use of an individual GPU per eye and so on.
To a certain extent, you can think of LiquidVR as FreeSync for VR kit.
Oculus CEO Brendan Iribe said achieving presence in a virtual world is one of the most important elements needed to deliver a good user experience.
He explained where AMD comes in:
“We’re excited to have AMD working with us on their part of the latency equation, introducing support for new features like asynchronous timewarp and late latching, and compatibility improvements that ensure that Oculus’ users have a great experience on AMD hardware.”
Raja Koduri, corporate vice president, Visual Computing, AMD, said content, comfort and compatibility are the cornerstones of AMD’s focus on VR.
AMD’s resident graphics guru said:
“With LiquidVR we’re collaborating with the ecosystem to unlock solutions to some of the toughest challenges in VR and giving the keys to developers of VR content so that they can bring exceptional new experiences to life.”
A picture is worth a thousand words, so here’s 3300 frames of AMD’s virtual reality vision.
It looks like the Mantle API developed by AMD is slowly reaching the end of its useful life.
Mantle has apparently served its purpose as a bridge between DirectX 11 and DirectX 12 and AMD is starting to tell new developers to focus their attention on DirectX and GLnext.
Raja Koduri, the Vice President of Visual and Perceptual Computing at AMD said in a blog post:
The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle “1.0” functionality, we suggest that you focus your attention on DirectX® 12 or GLnext.
This doesn’t mean a quick death for Mantle. AMD suggests it will support its partners and that there are still titles to come with support for Mantle. Battlefield Hardline is one of them and it’s a big one.
Back in November AMD announced a Mantle update, telling the world that there were four engines and 20+ launched or upcoming titles, and that 10 developers had publicly announced their support for Mantle.
There are close to 100 registered developers in the Mantle beta program. The Frostbite 3 engine (Battlefield Hardline), CryEngine (Crysis series), Nitrous Engine (Star Citizen) and Asura Engine (Sniper Elite) currently support Mantle. Some top games, including Thief and Sid Meier’s Civilization: Beyond Earth, also support Mantle.
AMD will tell developers a bit more about Mantle at the Game Developers Conference 2015, which starts today in San Francisco, and will talk more about its definition of an open platform. The company will also tackle new capabilities beyond draw calls, and Mantle will remain available for those who are already part of the program.
However, AMD suggests new partners should look the other way and focus on alternatives. When we spoke with Raja and a few other people from AMD over the last few quarters, we learned that Mantle was never supposed to take on DirectX 12. You should look at Mantle as AMD’s wish list: what AMD wanted and needed before Microsoft was ready to introduce DirectX 12. Mantle was a low-level rendering API and, keep in mind, it arrived almost two years before DirectX 12.
The Battlefield 4 Mantle patch came in February 2014, roughly a year ago, and it showed a significant performance increase on supported hardware. Battlefield Hardline is the next big game to support Mantle and it arrives in two weeks. CryEngine also supports Mantle, but we will have to wait and see whether that support ever translates into an actual game shipping with Mantle.
Spotted by GforGames in GeekBench test results, and running inside an unknown smartphone, MediaTek’s MT6795 managed to score 886 points in the single-core test and 4536 points in the multi-core test. These results were enough to put it neck and neck with the mighty Qualcomm Snapdragon 810 SoC tested in the LG G Flex 2, which scored 1144 points in the single-core and 4345 in the multi-core test. While the Snapdragon did outrun the MT6795 in the single-core test, the multi-core test was clearly not kind to it.
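To put those scores in perspective, here is the multi-to-single-core scaling implied by the quoted numbers; this ratio is a derived metric of ours, not something GeekBench itself reports.

```python
# Scaling ratio (multi-core / single-core) from the scores quoted above.

mt6795 = {"single": 886, "multi": 4536}   # MediaTek MT6795
sd810 = {"single": 1144, "multi": 4345}   # Snapdragon 810 in the LG G Flex 2

def scaling(scores):
    return scores["multi"] / scores["single"]

print(round(scaling(mt6795), 2))  # ~5.12: scales well across its 8 cores
print(round(scaling(sd810), 2))   # ~3.8: faster per core, scales worse
```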
The unknown device was running the Android Lollipop OS and packed 3GB of RAM, which might have given the MT6795 an edge over the LG G Flex 2.
MediaTek’s octa-core MT6795 was announced last year and while we are yet to see some of the first design wins, recent rumors suggested that it could be powering Meizu’s MX5, HTC’s Desire A55 and some other high-end smartphones. The MediaTek MT6795 is a 64-bit octa-core SoC clocked at up to 2.2GHz, with four Cortex-A57 cores and four Cortex-A53 cores. It packs PowerVR G6200 graphics, supports LPDDR3 memory and can handle 2K displays at up to 120Hz.
We are just a few days from Mobile World Congress (MWC) 2015, which kicks off in Barcelona on March 2nd, so we are quite sure we will see more info as well as more benchmarks. While a single benchmark run on an unknown smartphone might not be the best representation of performance, it does show that MediaTek has a good chip and can compete with Qualcomm and Samsung.
According to Tom’s Hardware, one of the unexpected features of DirectX 12 is the ability to use Nvidia GPUs alongside AMD GPUs in multi-card configurations.
Because DirectX 12 operates at a lower level than previous versions of the API, it is able to treat all available video resources as one unit. Card model and brand make no difference to a machine running DX12.
This could mean that the days of PC gamers having to decide between AMD and Nvidia are over: they can pick their preferred hardware from both companies and enjoy the best of both worlds. They will also be able to mix old and new cards.
However, there might be a few problems with all this. Rather than users worrying about hardware optimization, software developers will have to be on the ball to make sure their products work.
More hardware options means more potential configurations that games need to run on, and that could cause headaches for smaller studios.
It would appear that the world is rushing to Nvidia to buy its latest GPU at the expense of AMD.
According to the data, Nvidia and AMD each took dramatic swings from Q4 2013 to Q4 2014, with Nvidia widening its lead over AMD by more than 20 percentage points as AMD’s market share dropped from 35 per cent at the end of 2013 to just 24 per cent at the end of 2014.
Meanwhile, Nvidia has gone from 64.9 per cent at the end of 2013 to 76 per cent at the end of 2014.
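A quick consistency check on those JPR figures (our own arithmetic, not part of the report) shows the two vendors between them account for essentially the whole discrete AIB market:

```python
# AMD and Nvidia discrete AIB shares from the JPR figures quoted above.

amd = {"q4_2013": 35.0, "q4_2014": 24.0}
nvidia = {"q4_2013": 64.9, "q4_2014": 76.0}

for quarter in ("q4_2013", "q4_2014"):
    # Shares should sum to roughly 100 per cent of the market.
    print(quarter, round(amd[quarter] + nvidia[quarter], 1))  # 99.9, 100.0

amd_swing = round(amd["q4_2014"] - amd["q4_2013"], 1)            # -11.0 points
nvidia_swing = round(nvidia["q4_2014"] - nvidia["q4_2013"], 1)   # +11.1 points
print(amd_swing, nvidia_swing)
```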
The report, JPR’s AIB Report, looks at computer add-in boards (AIBs), which carry discrete graphics for desktop PCs, workstations, servers, and other devices such as scientific instruments.
In all cases, AIBs represent the higher end of the graphics industry using discrete chips and private high-speed memory, as compared to the integrated GPUs in CPUs that share slower system memory.
On a year-to-year basis, total AIB shipments during the quarter fell by 17.52 per cent, a steeper decline than desktop PCs, which fell by 0.72 per cent.
However, in spite of the overall decline, somewhat due to tablets and embedded graphics, the PC gaming momentum continues to build and is the bright spot in the AIB market.
The overall PC desktop market increased quarter-to-quarter, including double attach (the adding of a second or third AIB to a system with integrated processor graphics) and, to a lesser extent, dual AIBs in performance desktop machines using either AMD’s CrossFire or Nvidia’s SLI technology.
The attach rate of AIBs to desktop PCs declined from a high of 63 per cent in Q1 2008 to 36 per cent this quarter.
In other words, it is clear that the Radeon R9 285 release didn’t have the impact AMD had hoped for, and that Nvidia’s Maxwell GPUs, the GeForce GTX 750 Ti, GTX 970 and GTX 980, have impacted the market even more than expected.
This is ironic because the GTX 970 has been getting a lot of negative press over its memory issue, while AMD makes some good gear, has better pricing and employs a team of talented, aggressive PR and marketing folks.
Intel’s exascale computing efforts have received a boost with the extension of the company’s research collaboration with the Barcelona Supercomputing Center.
Begun in 2011 and now extended to September 2017, the Intel-BSC work is currently looking at scalability issues with parallel applications.
Karl Solchenbach, director of Intel’s Innovation Pathfinding Architecture Group in Europe, said it was important to improve the scalability of threaded applications on many-core nodes through the OmpSs programming model.
The collaboration has developed a methodology to measure these effects separately. “An automatic tool not only provides a detailed analysis of performance inhibitors, but also it allows a projection to a higher number of nodes,” says Solchenbach.
BSC has been making HPC tools and has given Intel an instrumentation package (Extrae), a performance data browser (Paraver) and a simulator (Dimemas) to play with.
Charlie Wuischpard, VP and GM of High Performance Computing at Intel, said that the Barcelona work is pretty big scale for Chipzilla.
“A major part of what we’re proposing going forward is work on many core architecture. Our roadmap is to continue to add more and more cores all the time.”
“Our Knights Landing product that is coming out will have 60 or more cores running at a slightly slower clock speed but give you vastly better performance,” he said.
Sony is expected to use more MediaTek application processors in upcoming Xperia smartphones.
According to Digitimes, the Japanese consumer electronics giant is planning to increase its reliance on MediaTek chips in entry-level and mid-range smartphones this year. There is still no word on high-end products, and it seems Qualcomm’s 800-series parts will continue to power Xperia flagships for the time being.
Sony is also working with a number of Taiwanese ODMs like Foxconn, FIH Mobile, Compal and Arima Communications. The company’s latest Xperia E4 smartphone was in fact outsourced to Arima.
As for Foxconn/FIH Mobile and Compal, they are said to be developing 4G models for Sony, which means they are supposed to cover the mid-range segment. Most of these new models are expected to be based on MediaTek’s new octa-core MT6752 processor, which packs 64-bit Cortex-A53 cores.
The affordable MT6752 has already found its way into a number of Chinese mid-range smartphones, as well big-brand devices like the HTC Desire 826 and Acer Liquid Jade S.
MediaTek is predicting that its revenues will decline by 10-18 per cent in the first quarter of 2015.
Estimates are about $1.44 billion which is not to be sneezed at but is still not that good.
Part of the problem is smartphone vendors’ transition from old to new products, along with seasonal factors and fewer working days due to the Lunar New Year holidays.
The figures came from company president Hsieh Ching-chiang, who said gross margin will be 46-48 per cent in the first quarter.
MediaTek’s shipments for smartphones are set to top 450 million units in 2015, up about 29 per cent from the 350 million units shipped in 2014. Shipments for 4G LTE devices will reach 150 million units in 2015.
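The growth figure checks out; the arithmetic below is ours, based on the quoted shipment numbers.

```python
# Year-on-year smartphone chip shipment growth from the figures above.

shipped_2014 = 350_000_000
forecast_2015 = 450_000_000

growth_pct = (forecast_2015 - shipped_2014) / shipped_2014 * 100
print(round(growth_pct, 1))  # 28.6, i.e. "up about 29 per cent"
```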
Apparently, the company is planning to expand in China’s LTE chip market in 2015. Strong shipments for LTE chips will contribute to the company’s revenue growth during the year.
MediaTek expects to post a double-digit revenue increase in 2015, according to Hsieh.
Computer security firm Trend Micro says it has discovered new spyware that infects iPhones, gathers large amounts of personal information and sends it to a remote server.
The spyware, called XAgent, is delivered via a phishing attack using a technique called island hopping. In that, the phones of friends and associates of the true target are first infected and then used to pass on the spyware link. It’s based on the assumption that the target is more likely to click on links from people they know than from strangers.
Once installed, XAgent will collect text messages, contact lists, pictures, geo-location data, a list of installed apps, a list of any software processes that are running and the WiFi status of the device. That information is packaged and sent to a server operated by the hackers. XAgent is also capable of switching on the phone’s microphone and recording everything it hears.
XAgent runs on both iOS 7 and iOS 8 phones, whether they’ve been jailbroken or not. It is most dangerous on iOS 7 since it hides its icon to evade detection.
On iOS 8 it isn’t hidden and needs to be manually launched each time the phone is rebooted — a process that would require the user to purposely reinfect their phone each time. For that reason, Trend Micro believes the spyware was written before iOS 8 was launched last year.
While close to three quarters of Apple mobile devices are using iOS 8, a quarter are still running iOS 7, according to data published by Apple this week.
“We’ve been monitoring the actors behind this for quite some time,” said Jon Clay, senior manager of Global Threat communication at Trend Micro, in a phone interview. “The criminals have introduced [the iOS app] as part of their campaign to move further into the [targeted] organization, using this rather than PC malware.”
While the identity of the hackers isn’t known, Trend Micro says it believes those behind what it calls “Operation Pawn Storm” to be a pro-Russian group. Past targets have included military organizations, defense contractors, embassies and media groups.