MediaTek has announced two more Helio X20 series products – the Helio X27 and the X23. As the names suggest, the Helio X27 is faster than the X25, while the X23 is a bit slower.
The Helio X25 was MediaTek’s fastest deca-core 20nm SoC, built around a tri-cluster design, and it ended up in quite a few prominent higher-end Chinese phones, including a few Meizu devices. But it looks like customers wanted slightly faster camera, CPU and GPU performance for their late 2016 / early 2017 phones, the ones that will launch before the Helio X30 comes to market.
Jeffrey Ju, Executive Vice President and Co-Chief Operating Officer at MediaTek, said: “The MediaTek Helio platform fulfills the diverse needs of device makers. Based on the success of the MediaTek Helio X20 and X25, we are introducing the upgraded MediaTek Helio X23 and X27. The new SoCs support premium dual camera photography and provide best-in-class performance and power consumption.”
The Helio X25 has two Cortex-A72 cores clocked at 2.5GHz, four Cortex-A53 cores clocked at 2.0GHz and the last four Cortex-A53 cores clocked at 1.55GHz. The Mali-T880 graphics is clocked at 850MHz.
The Helio X20 has two Cortex-A72 cores clocked at 2.1GHz, four Cortex-A53 cores clocked at 1.85GHz and the last four Cortex-A53 cores clocked at 1.4GHz. The Mali-T880 graphics is clocked at 780MHz.
The newcomer, the Helio X27, has two Cortex-A72 cores clocked at 2.6GHz, four Cortex-A53 cores clocked at 2.0GHz and the last four Cortex-A53 cores clocked at 1.6GHz. The Mali-T880 graphics is clocked at 875MHz. The rest of the specification is identical to the Helio X25.
The Helio X23 has two Cortex-A72 cores clocked at 2.3GHz, four Cortex-A53 cores clocked at 1.85GHz and the last four Cortex-A53 cores clocked at 1.4GHz. The Mali-T880 graphics is clocked at 780MHz. As you can see, this is just a slightly faster version of the Helio X20, and its specs sit just below the Helio X25.
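For quick reference, the published clocks of the four deca-core parts can be summarized in a few lines of Python. The figures are taken straight from the paragraphs above; the “big/mid/little” labels for the three clusters are just a convenient shorthand, not MediaTek’s naming:

```python
# Published clock speeds of MediaTek's deca-core Helio X20 line-up,
# as listed above (big / mid / little CPU clusters in GHz, GPU in MHz).
socs = {
    "Helio X20": {"big": 2.1, "mid": 1.85, "little": 1.4,  "gpu_mhz": 780},
    "Helio X23": {"big": 2.3, "mid": 1.85, "little": 1.4,  "gpu_mhz": 780},
    "Helio X25": {"big": 2.5, "mid": 2.0,  "little": 1.55, "gpu_mhz": 850},
    "Helio X27": {"big": 2.6, "mid": 2.0,  "little": 1.6,  "gpu_mhz": 875},
}

# Rank the chips by big-cluster clock, fastest first.
ranking = sorted(socs, key=lambda name: socs[name]["big"], reverse=True)
print(ranking)  # ['Helio X27', 'Helio X25', 'Helio X23', 'Helio X20']
```

The ordering matches the naming scheme: the X27 tops the range, the X23 slots in between the X20 and X25.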
Thanks to MediaTek-engineered advancements in the CPU/GPU heterogeneous computing scheduling algorithm, both products are claimed to deliver more than a 20 percent overall processing improvement and significant increases in web browsing and application launching speeds. This sounds promising, but bear in mind that MediaTek has had plenty of time to optimize these updated designs.
Phones based on the Helio X27 and X23 will be available soon.
While waiting for Zen is remarkably like waiting for Godot, we have just been told that AMD will be holding a sneak peek of its high-performance Zen CPU on 13 December.
The preview will be streamed at 1 p.m. PST on December 13. You can sign up at AMD’s website to have a shifty. The host of the event will be video gamer hack Geoff Keighley, and it will be called “New Horizon.”
According to the email, the event will be an “exclusive advance preview of our new ‘Zen’ CPU ahead of its 2017 Q1 launch”.
“See eSports & Evil Geniuses legend PPD put ‘Zen’ through its paces. There’ll be appearances from special guests and giveaways. This is the first time the public will be able to try it themselves and see its capabilities. If you’re serious about gaming, this is an event you do not want to miss.”
What we are expecting is that AMD will show off the quad-core version of Zen. AMD will have four Zen-based CPUs in the “Summit Ridge” family launching early next year.
The line-up will probably include two eight-core chips with Simultaneous Multi-Threading at the top, a six-core SR5, and a quad-core SR3.
What we are curious about is whether the pricing rumors are correct. The highest-end eight-core is rumored to cost $500, while a second, slower eight-core chip could be as low as $350. This will really scare the bejesus out of Intel, as AMD is promising better performance for half the price.
Other rumors say that the six-core SR5 will hit the shops for $250 and the quad-core SR3 for $150. Intel gear will set you back $320 for its quad-core Core i7-6700K, and its cheapest six-core costs $380.
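To put those rumored numbers side by side, a quick price-per-core calculation makes the gap obvious. All of the figures below are the unconfirmed rumors quoted above, used purely for illustration:

```python
# Rumored AMD Zen prices vs. quoted Intel street prices: (cores, USD).
# These are unconfirmed rumors, used only to illustrate the comparison.
amd = {"SR7 (8C)": (8, 500), "SR7 slow (8C)": (8, 350),
       "SR5 (6C)": (6, 250), "SR3 (4C)": (4, 150)}
intel = {"Core i7-6700K (4C)": (4, 320), "cheapest 6-core": (6, 380)}

def per_core(parts):
    """Dollars per physical core for each part."""
    return {name: price / cores for name, (cores, price) in parts.items()}

print(per_core(amd))    # the SR3 works out to $37.50 per core
print(per_core(intel))  # the i7-6700K works out to $80 per core
```

Even the top-end SR7 at $500 would cost $62.50 per core, less than Intel’s cheapest quad, which is exactly why these rumors have people excited.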
According to industry sources, TSMC is planning to introduce a 12 nanometer half-node process to shore up its competitiveness at the 28nm-and-below process nodes that have been adopted over the past few years.
The chip manufacturer’s 12nm process node will join its existing 16nm process portfolio as a smaller option in order to give it a competitive advantage against Samsung and GlobalFoundries. It is expected to offer improved leakage characteristics at a lower cost than its 16nm lineup. TSMC currently offers three variants of its 16nm FinFET process designed both for high-performance devices, as well as for ultra-low power situations requiring less than 0.6 volts.
Back in September, GlobalFoundries was the first to announce a 12nm process using Fully Depleted Silicon-On-Insulator (FD-SOI) planar technology. The foundry claims that 12FDX can deliver “15 percent more performance over current FinFET technologies” with “50 percent lower power consumption,” at a cost lower than existing 16nm FinFET devices.
TSMC currently supplies 16nm chips to a number of American, Chinese and Taiwanese companies including Apple, Nvidia, Xilinx, Spreadtrum and MediaTek, while GlobalFoundries provides chips using 14nm FinFET technology for AMD’s Polaris graphics cards and upcoming Zen processors. Meanwhile, Samsung provides 14nm LPP technology to Qualcomm for its Snapdragon 820 series and for use in its own mobile device lineup.
Although TSMC’s 12nm process was originally planned to be introduced as a fourth-generation 16nm optimization, it will now be introduced as an independent process technology instead. Three of the company’s partners have already received tape-outs on 10nm designs and the process is expected to start generating revenues by early 2017. Apple and MediaTek are likely to be the first with 10nm TSMC-based products, while the 12nm node should become a useful enhancement to fill the competition gap before more partners are capable of building 10nm chips.
As we near the end of a pivotal year for virtual reality, it’s clear there is still a lot of work to be done. With the arrival of Oculus Rift, HTC Vive and PlayStation VR, consumers finally have access to the technology that has commanded much of the industry’s attention and excitement for the past four years, but only now can we gauge how popular it may become.
That’s according to Aki Järvinen, founder of research and consulting initiative Game Futures, currently working at Sheffield Hallam University. Järvinen will be speaking about trends in virtual reality development at this week’s Develop:VR conference in London. We caught up with him ahead of the event to find out his thoughts on the industry’s next step after this year’s long-awaited hardware launches.
“There is a definite need for VR to find its own voice,” he tells GamesIndustry.biz. “We know very little about user habits with the headsets, for example. How does the isolating, solitary nature of current VR tech affect the frequency of use and thus retention with games? Early data shows that game time spent on VR titles is nowhere close to PC titles in the same genres.
“Kevin Kelly, the former editor of Wired, has talked about how the Internet has proceeded to its current form as streams and flows, from its ‘newspapery’ web origins. I expect something similar to happen with VR games; currently, it’s about imitating existing genres with the added value of VR-enhanced sense of presence, but developers and designers should experiment with other paradigms.
“2016 has been the first proper year for developers to test the waters on if the market is profitable yet and learn about releasing games for the actual retail platforms. Strategic product decisions are being made as we speak, based on these early experiences.”
Many developers have said that virtual reality tears up the game design rulebook, requiring completely new theories and practices when it comes to game creation. By now, studios have poured years into experimenting with VR games and it would be fair to argue that the early pages of that rulebook have been written – but Järvinen believes the conventions and best practices established so far are largely temporary.
With more changes expected from the headsets themselves, plus the accessories and controllers supporting them, Järvinen argues that the time span has been “too short for [findings] to stick” and that gameplay design solutions in use now will be almost irrelevant in just a few years.
“If one looks at games like Batman: Arkham VR, for example, the designers have clearly tried to turn the current constraints of the platform – lack of movement in particular – to their advantage, and design gameplay around the constraints,” he says. “They’ve done this with very deliberately crafted, static setpieces that leverage VR’s other strengths, such as experiencing the scope and scale of things in a more startling, life-like way. Yet, once those movement constraints go away, it’s hard to see anyone designing in that paradigm anymore. So it’s an agile rulebook in constant change.”
The future of virtual reality will, therefore, be defined by its hardware rather than its software, and the Game Futures founder predicts significant evolution from the devices people are picking up in stores this Christmas.
“VR has enough momentum now that it will go along the typical development path of similar technologies,” says Järvinen. “Headsets will become smaller, untethered, of higher resolution, trackers invisible, and so on. When these developments are able to coincide with lower production costs to the degree that retail price points become truly affordable, then we are on the cusp of a real breakthrough. Parallel to this, software has to evolve.”
It’s easy to argue that virtual reality software is already quite unevolved. With a handful of more ambitious or high-production projects being the exceptions, the vast majority of launch software for Oculus, Vive and PSVR is limited. Most current virtual reality titles offer a more immersive first-person perspective for long-established gameplay genres, with little more than the novelty of viewing the action through the headset to differentiate it from what has come before. Perhaps the most blatant examples are the waves of shooting gallery-style VR games, where players are restricted to either an on-rails experience, a gun turret or standing on the spot, blasting away at waves of enemies that appear in often scripted patterns.
Järvinen says the prominence of these games so far is “a concern” but believes that as the market evolves, both in terms of hardware and software, “the lesser formulas will wither out”.
He adds: “So far developers have benefited from the rush of early adopters who basically purchase or download everything. This might lead to vanity metrics, such as bloated download figures, or bloated revenue estimates, as there has been lots of free promotions, bundles, and so on. But the VR market cannot be sustained with spikes from early adopters and therefore the more inherently ‘VR’ titles and game design aspects will eventually prevail.
“VR in its current form still has too many disabling contexts in play, such as retail price, PC requirements, and the fact that many people experience nausea. While finding the new genres is important, they do not matter much if enough enabling contexts are not yet in place, and that means also cultural ones – such as social acceptability in a living room, or in public places with mobile VR – rather than just technical ones.”
The cultural challenges that virtual reality faces are by far the most significant. 2016 has seen VR find the audience it was always going to appeal to most (that is, avid consumers of video games and emerging technology), but hopes remain high that the tech will grow to have mainstream appeal. Certainly, that seems to be the intention of Facebook, which acquired Oculus back in 2014 and showed off new social communication functions, such as virtual chat rooms, at September’s Connect event.
Järvinen believes the social network has spent enough effort and money on virtual reality that it is “past the point of abandoning” its pursuit of mass-market appeal, but suggests future forms of the hardware will have more impact on the technology’s attractiveness than the companies backing it.
“True mainstream appeal would require technological developments, such as miniaturisation, but also use cases where users see obvious benefits. Facebook seems to bet on the social dimension being the latter. Creating accessible tools for VR content creation could be the home run.”
As such, we can expect to see more companies from beyond the games industry investing in the technology and those developing for it. While it might not reach the headline-grabbing heights of Facebook’s $2bn Oculus acquisition, there is little danger of funding for virtual reality projects drying up any time soon.
“The wow factor with VR is strong enough that, when executed innovatively by a capable team, investors will get on board,” Järvinen says. “Therefore I believe investments will stay steady but perhaps we won’t see news about the more exuberant sums before the market finds its own Supercell.”
Järvinen concludes by stressing that the non-games, even non-entertainment, applications for virtual reality will go a long way to not only broadening the technology’s appeal, but writing more pages of that agile rulebook.
“We should not forget applications of VR beyond games and entertainment,” he says. “I believe journalism can use similar aspirations for a heightened feeling of empathy, achieved by leveraging that sense of presence VR can produce. We are already seeing signs of this with 360 video pieces distributed via VR platforms.
“Lots of interesting stuff is also going on in medical applications and research, such as burn victim therapy via VR. Real estate market could benefit in a big way from virtual viewings. So VR will not have one end goal, but many.”
Four SKUs ranging between $150 and $500
A new pricing document originating from China indicates that AMD initially plans to release four Zen desktop SKUs in four, six and eight-core variants. Just like Intel’s high-end desktop lineups, none of these chips will feature integrated graphics.
At the top of the list is the Zen SR7 “Special” featuring eight cores, sixteen threads and priced at $500, followed by a standard Zen SR7 in the same core configuration for $350. In the mid-range segment is the Zen SR5, featuring six cores, twelve threads and priced at $250. In the entry-level segment is the Zen SR3, featuring four cores and eight threads and priced at $150.
Last week in a Maxsun email posted on Baidu, there were indications that high-end Zen chips would be priced up to ¥2,000 ($290), yet the latest leak now says they will go as high as ¥3,999 ($500) for the SR7 Special Edition, while the mid-range SR5 will be priced closer to the initial estimate.
As for specifications, the email also mentioned that Zen chips should have base frequencies between 3.15 and 3.30GHz, with 3.5GHz Boost clocks.
Zen SR7 engineering sample runs at 3.2GHz
Now, a new engineering sample of an eight-core Zen SR7 has been spotted by reliable AMD blogger DresdenBoy, who shared that the part number (1D3201A2M88F3_35/32_N) indicates a 3.2GHz chip with 3.5GHz Boost. Back in August, two eight-core Zen engineering samples appeared in a benchmark database with part numbers ending with “32/28_N,” indicating that they were running at 2.8GHz with 3.2GHz Boost.
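Putting the two leaks together, the trailing “BB/bb” field of these engineering-sample part numbers appears to encode the boost and base clocks in units of 100MHz (“35/32” for 3.5/3.2GHz, “32/28” for 3.2/2.8GHz). That interpretation is our own inference from the two samples, not anything AMD has confirmed, but it can be sketched in a few lines:

```python
import re

def decode_clocks(part_number):
    """Extract (boost_ghz, base_ghz) from a Zen engineering-sample
    part number such as '1D3201A2M88F3_35/32_N'.

    Assumption: the trailing 'BB/bb' field encodes boost/base clocks
    in units of 100MHz, inferred from the two leaked samples."""
    m = re.search(r"(\d{2})/(\d{2})_N$", part_number)
    if not m:
        raise ValueError("no clock field found")
    boost, base = (int(x) / 10 for x in m.groups())
    return boost, base

print(decode_clocks("1D3201A2M88F3_35/32_N"))  # (3.5, 3.2)
```

Applied to the August samples ending in “32/28_N”, the same rule yields the reported 3.2GHz Boost and 2.8GHz base.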
Performs like Core i7 6950X at half the price
Even taking these price points into consideration, an eight-core Zen SR7 at $500 may still perform similarly to Intel’s eight-core Core i7 5960X at $1,000, given Zen’s much more competitive IPC. The company’s switch back to Simultaneous Multi-Threading (Intel calls it Hyper-Threading) allows each core to run two threads just like Intel’s chips, so even the ten-core Core i7 6950X, with a 3GHz base and 3.5GHz Turbo, is a benchmark to consider.
The folks at Guru3D say Zen chips should have four integer units, two address generation units and four FP units, while decoding four instructions per clock. Compared to Bulldozer, bandwidth for L1 and L2 cache should be almost twice as fast, with each Zen core featuring the same amount of L3 cache per core as Intel.
Zen Summit Ridge series processors are currently expected to launch on January 17th following a CES announcement during the first week.
AMD has released the latest version of its ROCm software tools, which make it easier to write and compile parallel programs for its Radeon GPUs and upcoming Zen CPUs.
The software is designed to help put Zen under the bonnet of high-performance servers by turning CPU and GPU combos into compute nodes. If it all pays off, AMD could be back in a server market it has all but lost to Intel.
ROCm provides a base for the company to build GPUs for large-scale servers. It is a low-level programming framework like Nvidia’s CUDA, but it is open source and can work with a wide range of CPU architectures, including ARM, Power and x86.
According to PC World the ROCm platform is targeted at the large-scale server installations and for multiple GPUs in a cluster of racks.
It’ll work with AMD’s latest Radeon Pro GPUs and current consumer GPUs based on the Polaris architecture. It can be used to run neural networking clusters or for scientific computing.
AMD has not revealed any of its supercomputing GPU plans but said ROCm will play a big role as the company goes after the HPC space.
ROCm is based around the Heterogeneous System Architecture (HSA) spec which is supposed to link the computing power of CPU, GPU, and other processors in a system. AMD thinks HSA specifications could replace OpenCL, which is widely used today for parallel programming.
But what is more interesting is that AMD is chasing open-source standards, contrary to Intel, which still wants people to use its proprietary ones. This open saucy approach might be the novelty which helps AMD succeed. Open source does well in the HPC area, where development is a little more collaborative. It might be that AMD has hit on a system that works and can get its foot in the door.
All the leaks and other information coming out of Intel suggest that the outfit is getting excited about the overclocking market.
A lot of the marketing buzz around the desktop Kaby Lake architecture focuses on overclocking performance. Intel has several unlocked processors based on Kaby Lake, and they are not just at the high end.
Overclockable Kaby Lake Core i7 and Core i5 processors have already been leaked, but the trend suggests that Intel will target cash-strapped system builders with at least one unlocked Core i3 series processor: the dual-core Core i3-7350K. The retail box version will sell for $177, which means that street pricing could end up anywhere from $150 to $180.
The Core i3-7350K will have Hyper-Threading support and is fast already, with a base clock speed of 4GHz and a boost frequency of 4.2GHz. It is unclear how much overclocking you will get on top of that. But if you can squeeze out a couple of hundred MHz with air cooling, and the expected 61W TDP rating holds, you could have a cost-effective chip – provided it does not turn into a pile of molten plastic in your computer.
Kaby Lake is not that exciting to enthusiasts, but Intel seems to want to get a few more overclockers interested at the lower end of the market. A sub-$200 part that could open overclocking to a wider audience might just work.
It is a moot point whether this will do much for sales. Overclocking is useful if you know what you are doing, and most buyers in that price range either don’t know what they are doing, or are too scared to try it.
Nvidia has just announced its fiscal Q3 2017 results (not a typo, Ed.) and the company reported record revenue of $2 billion, up 54 percent from a year ago. This is finally proof that Nvidia is more than a GPU outfit, as it has expanded beyond its core business of gaming and professional graphics.
It took Nvidia a lot of sweat and a lot of failed products to get to where it is today. CEO Jen-Hsun Huang is not afraid of failure, and as CEO he took a lot of risks, knowing that one of them would eventually pay off. Don’t get me wrong, most of the money still comes from the gaming division, where Nvidia made $1.244 billion, up 63.47 percent year over year and 58 percent quarter over quarter. The new Pascal architecture is obviously paying off big time. In recent years, Nvidia executives have started talking more about automotive and deep learning than about gaming. Huang wants to present Nvidia in a new light, to prove that it is more than “just” a PC gaming company.
If you have been following Nvidia over the last 23 years as avidly as we have, you will know that many products and acquisitions didn’t pan out. Let me just mention a few obvious ones, starting with the PortalPlayer deal valued at $357 million. At the time, these guys were delivering chips for iPods, shortly before the company lost that deal. Then there was Ageia, which promised a gaming physics revolution in 2008 but failed to change the gaming world. The most recent was Icera, a modem company bought in 2011 that was supposed to help Nvidia fight in the mobile market. That didn’t work out either.
Nvidia completely retreated from the mobile phone market after realizing that it could not compete on its own terms or change the world of mobile phones. It tried to make its own gaming devices, including Tegra-based tablets and TV consoles, but didn’t really move mountains, achieving only niche success with its own hardware. It would also have been better had Nvidia not been forced to recall one of its products due to faulty batteries.
But Nvidia realized that self-driving cars and deep learning could hugely benefit from its GPU architecture. This is when Nvidia started to abandon the company tradition of only caring about pixels. The latest quarterly report is the culmination of a long process of transition from a mobile and GPU-oriented company to something bigger, a company whose processors might end up in millions of cars, helping you safely get from point A to point B. They will also help to train many artificial intelligence systems, including robots, with the help of deep learning algorithms.
This is not as sexy as some of the ideas Jensen promoted in recent years. Huang has always looked up to Steve Jobs and wanted to create a product you cannot live without, something that Jobs managed with many products and services – the iPod, iPhone, iPad and many Macs all gained a fanatical following. Essentially, Nvidia GPUs might end up in many cars. Tesla and Volvo are the first to announce their commitment to Drive PX2 assisted driving systems, and there will be more announcements in the future.
Nvidia is currently the king of the PC gaming market as well as the king of the professional GPU market with its Quadro product line. Many high-end PCs have Geforce cards in them, representing a stable revenue base, under the helm of industry veteran and Geforce General Manager Jeff Fisher. This division has been doing great for the last few years, putting a lot of pressure on AMD’s Radeon Technology Group, its primary competitor in the desktop and notebook markets. Gaming will remain the main source of income for Nvidia, backed by the Quadro professional graphics business, which is an incredible cash cow. The beauty is that the Quadro uses the same gaming GPU with a special professional driver and sells for a few times the price, and this is where most of the money is coming from – software rather than silicon.
Gaming revenue has almost doubled, data center revenue has tripled, and the automotive market almost doubled as well. Both data center and automotive are still rather small markets, but they will continue to grow in the years to come. However, there is a chance that gaming revenue may decrease due to stiff competition from the AMD Radeon Technology Group.
The fact is that Nvidia lost most of its entry-level business to market trends and the death of the discrete entry-level GPU, but that was a market that wasn’t making much money after all. Gaming is what Nvidia knows, and what makes it the most money on the desktop, and this is going to continue. There is always a big threat that the Radeon team might come up with more competitive products in 2017 and reclaim some of these huge gains from Nvidia.
The future growth of Nvidia will concentrate on automotive and deep learning, as Nvidia can sell a lot of these systems to big companies. Microsoft, Google, Facebook and Baidu see great potential in the deep learning market. SAP, educational institutions, start-ups, the oil and gas industry and the financial markets are also looking into deep learning computers, some of which, such as the DGX-1, cost more than $100,000 for education and even more for industrial use.
Market response to the latest quarterly report was quick. Last time we checked, Nvidia was up 29.73 percent to $87.92.
According to sources among motherboard makers, Intel is planning to add USB 3.1 and Wi-Fi functionality to its upcoming 300-series chipsets, scheduled to be released at the end of 2017.
The 200-series chipsets for Kaby Lake CPUs arriving in early 2017 will bring a few I/O improvements, including 24 PCI-E 3.0 lanes, six native SATA III 6Gbps ports and ten USB 3.0 ports, while native USB 3.1 support will arrive later with the 300-series platform towards the end of the year.
Digitimes reports that the decision will impact several third-party suppliers of Wi-Fi and USB 3.1 chips, including notebook WLAN supplier Broadcom, desktop WLAN supplier Realtek and USB 3.1 supplier ASMedia Technology.
ASMedia expects to see a decrease in orders for USB 3.1 host chips, but says that the standardization of the technology will increase demand for related chips and 10G signal redrivers and retimers, allowing it to place new orders. The company will also supply AMD with USB 3.1 chipsets for upcoming X370, A320 and B350 chipsets based on Socket AM4, which is expected to lower the market impact from Intel’s plan.
Analyst outfit Susquehanna Financial Group gave Intel the thumbs up for its newly developed “laser chips”, saying the development could be a game changer that hands the company the data center chip market for years.
Susquehanna analyst Christopher Rolland said that the chip-scale silicon photonics Intel has developed was one of the most important developments of our generation.
Intel has developed a miniaturized on-die version of its silicon photonics technology to be used as a super-high-speed optical interconnect between its Xeon server processor and an Altera field-programmable gate array, Rolland said.
It is a chip-to-chip, super high-speed, in-package optical interconnect that could revolutionize the semiconductor industry, he said.
Intel has “proof of concept” chips with the new technology and is looking to improve current low production yields. A commercial product could be ready in three to five years, Rolland said.
“This technology is nothing short of miraculous and we view it as a potential game changer for Intel and the semiconductor industry,” Rolland said.
Data transfer rates may start at 50 to 100 gigabits per second with the new chips, but could increase to half a terabit or even 2 terabits per second by early next decade.
Intel is expected to first use the optical interconnect technology to connect CPUs to Altera FPGAs. Then other technologies, including GPUs, ASICs, Xeon Phi, memory, and other CPUs will start using it.
Moving from electrical communication to optical communication technology has several advantages in chips. They include bandwidth density, low latency, energy efficiency and lower cost, Rolland said.
GlobalFoundries’ CTO has said that the industry needs 7nm, and that the company’s recent IBM purchase is helping him put together a new cunning plan for the technology.
Gary Patton, who has the job of building up the foundry house’s 7nm manufacturing technology, told Digitimes that the acquisition of IBM’s microelectronics unit was a big help because IBM had done a lot of work on differentiated 45/32/22/14nm process nodes and on improving its technologies for use in servers. The integration between the two sides will give GlobalFoundries a clearer blueprint for technology development.
Patton said that the 5G industry, as well as mobile computing, IoT and automotive electronics, will be the growth drivers for the next decade, particularly 5G products and datacentres, which need the support of high-performance computing.
He added that GloFo’s FinFET process was divided into two generations, including 14 nm and 7 nm.
“We cooperated with Samsung Electronics in the 14nm process previously, but we have decided to choose a different approach for the 7nm technology and, additionally, the IBM deal has significantly enhanced our resources and development capabilities allowing us to develop the 7nm process in-house,” he said.
GloFo decided to jump from 14nm directly to 7nm, skipping the 10nm process, because it believed 10nm would do little to improve power consumption and costs for clients.
“The 10nm node is more like a semi-generation process, similar to the previous 20nm technology, which could not meet clients’ requirements,” he said.
The foundry was getting comments from clients indicating that they needed 7nm products urgently, so pouring technology resources into developing the 7nm process made more sense.
GloFo’s internal roadmap has the 7nm process entering volume production in the first half of 2018, with initial clients including IBM and AMD.
“The 7nm process has a number of advantages, including multi-core and high-speed I/O capabilities, reducing power consumption by 60 per cent, upgrading performance by 30 per cent, cutting costs by 30 per cent and doubling the yield rate per wafer, while providing 2.5D/3D packaging services,” he said.
Earlier this week, MediaTek reported net profit growth of 19 percent for the third quarter of 2016, but it now expects revenue momentum to slow down in the fourth quarter as 28nm production capacity remains tight.
According to company vice chairman Mr. Hsieh Ching-jiang, investors can expect a revenue decline anywhere between seven and 15 percent, or between NT $66.6 billion ($2.11 billion) and NT $72.9 billion ($2.31 billion), while gross margins are expected to remain between 33.5 percent and 36.5 percent.
The reason for the decline is that foundries are still catching up with tight 28nm production demand, and supply constraints are likely to remain until the end of the year. MediaTek also notes that customer demand for certain products is slowing down, including smartphone and tablet chips and 4G modems.
The company’s overall smartphone and tablet chip shipments will decrease from 145-155 million units in Q3 to around 135-145 million in Q4, a drop of around seven percent. Revenue came in at NT$78.4 billion ($2.49 billion) last quarter, 37.6 percent higher than a year earlier.
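The guidance figures quoted above are internally consistent: a seven to 15 percent decline from last quarter’s NT$78.4 billion lands exactly on the NT$66.6-72.9 billion range the company gave. A quick sanity check:

```python
# Last quarter's revenue and the guided decline range, from the figures above.
q3_revenue = 78.4           # NT$ billion
decline = (0.07, 0.15)      # guided seven to 15 percent decline

low = q3_revenue * (1 - decline[1])   # worst case: 15 percent down
high = q3_revenue * (1 - decline[0])  # best case: seven percent down

print(round(low, 1), round(high, 1))  # 66.6 72.9
```

The arithmetic reproduces MediaTek’s quoted Q4 revenue range to the decimal.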
Meanwhile, the company is scheduled to announce its next-generation Helio chips at the end of this year, with volume production beginning in 2017. It will announce the high-end X30 SoC based on 10nm design and featuring a Cat 10 modem at the end of this year, followed by some entry-level and mid-range chips featuring Cat 7 modems sometime next year.
Intel has been showing off the next generation of its Atom microprocessors, intended for the Internet of Things (IoT).
According to Chipzilla, the Intel Atom E3900 series has been designed from the ground up around its 14-nanometer manufacturing process. As a result it has 1.7 times the compute power of older Atoms and can support faster memory speeds and greater memory bandwidth.
Ken Caviasca, vice president of Intel’s Internet of Things group and general manager of platform engineering and development, said that the chip has been built into a compact flip chip ball grid array (FCBGA). It is designed for those moments where scalable performance, space and power are tricky – like the Internet of Things.
The Intel Atom can manage 3D graphics – improved by 2.9 times compared to the previous Atom generation – and can support three independent displays.
The Atom was originally designed to power netbooks, so it already had some video thrown into the mix. The new E3900 series has four vector image processing units, resulting in better visibility, better video quality in low light, noise reduction, and color and detail preservation.
One of the more important parts of the chip is the ability to keep devices synchronized via the Intel Time Coordinated Computing feature.
“By synchronizing clocks inside the system-on-a-chip and across the network, Intel Time Coordinated Computing Technology can achieve network accuracy to within a microsecond,” Caviasca said.
Intel appears to have put the brakes on its 3D XPoint memory modules, despite talking up the technology for a while.
For those who came in late, 3D XPoint is a next-generation non-volatile memory technology that is supposed to be faster than NAND flash while much cheaper than DRAM.
Intel had previously said that these 3D XPoint memory modules would be supported on a “future Intel Xeon processor”. It was thought that Intel was referring to Skylake-EP, which is expected in the first half of 2017.
However, on Intel’s most recent earnings call, CEO Brian Krzanich indicated that this wouldn’t be the case.
According to Krzanich, the “future Xeon processor” that will support 3D XPoint memory modules will not be the upcoming Skylake-EP but its successor, known as Cannonlake-EP.
If Intel’s upcoming 3D XPoint memory modules require Cannonlake-EP to work, then investors should realistically expect that Intel won’t be selling those modules until late 2018 at the earliest.
Samsung’s new mobile RAM uses LPDDR4 technology and a 10nm process. The arrival of 64-bit processors allowed phone RAM to increase beyond 4GB, but few manufacturers could be bothered – even Samsung passed on it. However, it now seems that with the new generation of RAM Samsung thinks it is worthwhile, and it will be jumping directly from 4GB to 8GB by next year.
LPDDR4 is currently the fastest type of low-power memory on the mobile market. Samsung says it delivers twice the speed of PC-class DDR4 RAM, operating at 4,266Mbps versus the PC part’s 2,133Mbps.
By using a 10nm process, the DRAM package measures only 15 x 15 x 1mm and can be stacked above or under other chips. While we can’t see the point of the technology in mobile phones, it does make a lot of sense in tablets.
Samsung has hinted that it is going to release the technology to the mobile world by the end of the year, so we should see next year’s flagship models shipping with 8GB of RAM.