
Intel’s Core X Appears To Be In High Demand

July 21, 2017 by  
Filed under Computing

Intel appears to have misjudged demand for its eight- and 10-core Core X processors in the high-end desktop market.

Word on the street is that you can’t get your paws on eight- and 10-core Core X processors for love or money.

Part of the issue is that the Skylake-X-based Core X chips are based on the same silicon Intel sells to its data-centre customers in the form of Xeon Scalable Processors. Those punters get priority, as data-centre orders generally dramatically outnumber those from high-end desktop customers.

Another theory is that the chip’s launch was rushed and there was not enough time to build inventories of some of these Core X chips. As a result, Intel did not get enough of the chips to distributors.

Like most rumors it brings up other questions – is Intel having difficulty making enough Core X chips?

Core X was seen as an aggressive move by Intel, which validated the chips at much higher speeds out of the box than it had done with previous high-end desktop chips. But this has caused Intel a bit of a headache, as it has to certify that each of the 10 cores on the 7900X runs at 4.3GHz. In the old days Intel only had to certify clocks around 22 percent lower.

Intel claimed that the 14-nanometer+ technology used to build the 7900X is about 12% faster than the 14-nanometer technology used to build the 6950X. But squeezing those sorts of numbers out of the 7900X must have posed a few design problems.
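As a back-of-envelope check on that 22 percent figure, here is a short Python sketch. The 4.3GHz number comes from the article; the 3.5GHz top turbo for the 6950X is our assumption:

```python
# Rough sanity check of the clock-certification gap described above.
old_clock_ghz = 3.5   # Core i7-6950X top turbo (assumed)
new_clock_ghz = 4.3   # Core i9-7900X per-core figure quoted in the article

uplift = (new_clock_ghz - old_clock_ghz) / old_clock_ghz
print(f"Certified clock uplift: {uplift:.0%}")
```

Which lands in the same ballpark as the article’s 22 percent figure.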

Then there is the matter of the price which, thanks to AMD, has to be lower than what Intel could previously get away with. Intel sells the 7900X for $999, while the older 6950X sold for around $1,700. The lower price pushes demand for the chip higher, so Intel is being required to make more of chips which are actually trickier to make.

All this plays into AMD’s hands, as it is a lot easier to compete with Intel when Intel has nothing to sell.

Courtesy-Fud

Pricing For AMD’s Ryzen 1300X Leaked

July 20, 2017 by  
Filed under Computing

According to a Reddit post, upcoming Ryzen 3 SKUs, the Ryzen 3 1300X and the Ryzen 3 1200, will be hitting the market at US $129 and US $109 (exc. VAT), respectively.

While AMD has revealed a lot of details regarding its two upcoming quad-core Ryzen 3 SKUs yesterday, including the clocks and the launch date, we are still missing a couple of key details, including the TDP, amount of cache and the price.

In case you missed it yesterday, both Ryzen 3 SKUs are quad-core parts without SMT (Simultaneous Multi-Threading) support, so they will stick to “just” four threads. The Ryzen 3 1300X works at 3.5GHz base and 3.7GHz Turbo clocks, while the Ryzen 3 1200 works at 3.1GHz base and 3.5GHz Turbo clocks. As rumored earlier, the Ryzen 3 lineup should retain the same 2MB/8MB (L2/L3) cache as the Ryzen 5 series and should have the same 65W TDP, although these details are yet to be confirmed.

Luckily, a Reddit user has managed to get unconfirmed details regarding the price of these two SKUs, suggesting that they should launch at US $129 for the Ryzen 3 1300X and US $109 for the Ryzen 3 1200, excluding VAT. While the price of the Ryzen 3 1300X sounds about right, and similar to what we heard before, we have our doubts regarding the Ryzen 3 1200 price, which we suspect would be closer to the US $100 mark.

In any case, we’ll know for sure in about two weeks when these parts are scheduled to hit retail/e-tail shelves. It will be quite interesting to see these Ryzen 3 SKUs compared to some Intel Core i3 Kaby Lake dual-core parts as we are quite sure that these will give Intel a hard time in that part of the market, offering significantly higher performance for much less money.

Courtesy-Fud

Intel Launches ‘Purley’ Xeon To Compete With AMD’s Epyc Processors

July 19, 2017 by  
Filed under Computing

Intel has unveiled a new series of ‘Purley’ Xeon server processors based on its new Skylake-SP architecture. 

The new Intel Xeon SP (‘Scalable Platform’) CPUs, which feature up to 28 processor cores per socket and six terabytes of system memory, were unveiled in New York on Tuesday, and the firm claims a 1.65 times performance boost, on average, compared to the prior generation Broadwell-based server CPUs.  

The launch comes just weeks after AMD unveiled its Epyc line of server processors based on the Zen architecture, which offer up to 32 cores per chip. Intel took a dig, naturally, and claims that its top-end Xeon Scalable processor delivers 28 per cent faster performance than AMD’s Epyc 7601. 

Given the higher core count of its new server chips, Intel has created an all-new Mesh architecture design which it claims will offer a “fundamental change” to performance. Unlike the firm’s previous ‘ring’ design (arf), the Mesh architecture arranges the individual cores in a two-dimensional grid, offering more direct paths and, in turn, faster performance.

The new processors, Intel claims, have been designed for growing, compute-heavy workloads such as cloud computing, autonomous vehicles, 5G and artificial intelligence, the latter of which will reportedly see a 2.2x performance increase with Xeon Scalable.

“Data centre and network infrastructure is undergoing massive transformations to support emerging use cases like precision medicine, artificial intelligence and agile network services paving the path to 5G,” said Navin Shenoy, executive vice president and general manager of the Intel Data Center Group.

“Intel Xeon Scalable processors represent the biggest data center advancement in a decade.”

The new Xeon SP processors also deliver a 3.1 times generation-over-generation improvement in cryptography performance, according to Intel, which has also built its Key Protection Technology onto the chip to deliver enhanced protection against attacks on security keys.

The Xeon Processor Scalable Family offers four processor tiers, representing different levels of performance and a variety of integration and acceleration options: Bronze, Silver, Gold and Platinum.

Intel says that it has already begun rolling out the new hardware to customers, with the likes of Google Cloud, AWS and AT&T having already bagged some of the 500,000 units shipped out ahead of Tuesday’s official launch. 

Intel’s server-class Purley processors are tipped to power Apple’s upcoming iMac Pro, which is set to be released in December.

Courtesy-TheInq

Is Intel Worried About AMD’s Epyc Processor

July 19, 2017 by  
Filed under Computing

Intel is clearly feeling a little insecure about AMD’s new Epyc server processor range based on the Ryzen technology.

Intel’s press office retreated to the company safe and pulled out its favorite pink handbag and emerged swinging.

It did a direct comparison between the two, and in one slide, it mentioned that the Epyc processor was ‘inconsistent’, and called it ‘glued together’.

Intel noted that it required a lot of optimisations to get it to work effectively, comparing it to the rocky start AMD had with Ryzen on the desktop. That is pretty much fighting talk, and it has gone down rather badly.

TechPowerUp noted that even though Epyc did contain four dies, it offered some advantages as well, like better yields. On top of that, they noted: “So AMD’s server platform will require optimisations as well because Ryzen did, for incomparably different workloads? History does inform the future, but not to the extent that Intel is putting it here to, certainly. Putting things in the same perspective, is Intel saying that their Xeon ecosystem sees gaming-specific optimizations?”

Intel still has a healthy lead on AMD in the server space. However, since the launch of Ryzen, Intel has seen a significant drop in support in the desktop market.

Trash talking is usually a sign that there is not much difference between products and it never really works – other than to amuse.

AMD announced its line of Epyc processors last month. The range consists of chips between eight and 32 cores, all of which support eight channels of DDR4-2666 memory. Pricing was announced to start from $400.

Courtesy-Fud

Is Hyper-Threading Broken On Intel’s Kaby Lake And Skylake Processors?

July 6, 2017 by  
Filed under Computing

While the war is blazing between Intel and AMD fanboys over the superiority of the latest range of chips, Debian developers have spotted some rather nasty coding in Intel’s latest creations.

During April and May, Intel started updating processor documentation with a new errata note, and it turned out that the reason was that Skylake and Kaby Lake silicon has a microcode bug it did not want anyone to find out about.

The errata is described in detail on the Debian mailing list, and affects Skylake and Kaby Lake Intel Core processors in desktop, high-end desktop, embedded and mobile platforms, Xeon v5 and v6 server processors, and some Pentium models.

The Debian advisory says affected users need to disable hyper-threading “immediately” in their BIOS or UEFI settings, because the processors can “dangerously misbehave when hyper-threading is enabled”.

Symptoms can include application and system misbehavior, data corruption, data loss and voting Donald Trump. We made the last one up.

Henrique de Moraes Holschuh warned that all operating systems, not only Linux, were subject to the bug.

Intel said that under complex micro-architectural conditions, short loops of less than 64 instructions that use AH, BH, CH or DH registers as well as their corresponding wider register (eg RAX, EAX or AX for AH) may cause unpredictable system behaviour.

“This can only happen when both logical processors on the same physical processor are active.”

It might never have been noticed if Mark Shinwell, a developer working on the OCaml toolchain, had not contacted the Debian team to explain that the OCaml compiler triggered an Intel microcode issue.

Debian’s post notes that Intel has documented the bug and its microcode fixes for Core 6th generation, Core 7th generation, Xeon v5 and v6, and Core 6th generation X series. Mostly this depends on vendors providing BIOS/UEFI updates. However, until they do that, it would be wise to disable hyper-threading or even shift to AMD.
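For the curious, here is a minimal Python sketch of how one might flag a potentially affected part by parsing /proc/cpuinfo-style text. The family/model pairs below are an assumed, partial list for illustration only; consult the Debian advisory for the authoritative set:

```python
# Illustrative sketch: flag CPUs that may fall in the affected Skylake/Kaby Lake
# families. The (family, model) pairs here are an assumed subset, not the full
# list from the advisory.
AFFECTED = {(6, 78), (6, 94), (6, 142), (6, 158)}

def possibly_affected(cpuinfo_text):
    """Return True if the parsed family/model pair is in the assumed list."""
    family = model = None
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        key = key.strip()
        if key == "cpu family":
            family = int(value)
        elif key == "model":   # note: "model name" lines do not match this key
            model = int(value)
        if family is not None and model is not None:
            return (family, model) in AFFECTED
    return False

sample = "cpu family\t: 6\nmodel\t\t: 158\nmodel name\t: hypothetical Kaby Lake part"
print(possibly_affected(sample))
```

On a real Linux box you would feed it the contents of /proc/cpuinfo; a hit only means “check your microcode and BIOS”, not that the machine is definitely broken.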

Courtesy-Fud

AMD Goes EPYC To Take On Intel In The Server Space

June 30, 2017 by  
Filed under Computing

AMD has unveiled the first generation of its Zen-based Epyc server processors as it looks to take on Intel in the data centre market.

We knew this was coming, and AMD on Monday showed off its AMD Epyc 7000 series at an event in Austin, Texas. The lowest-spec offering is the Epyc 7251, which offers eight cores supporting 16 simultaneous threads, and a base frequency of 2.1GHz that tops out at 2.9GHz at maximum boost.

The Epyc 7601 is the firm’s top-of-the-line chip, and packs 32 cores, 64 threads and a base frequency of 2.2GHz, with maximum boost at 3.2GHz. AMD claims that, compared to Intel’s comparable Xeon processors – which offer up to 24 cores – the new Epyc 7601 offers 47 per cent higher performance.

What’s more, AMD claims that each Zen core is about 52 per cent faster per clock cycle than the previous generation, and boasts that the chips are more competitive in integer, floating point, memory bandwidth, and I/O benchmarks and workloads.

“With our Epyc family of processors, AMD is delivering industry-leading performance on critical enterprise, cloud, and machine intelligence workloads,” said Lisa Su, president and CEO of AMD.

“Epyc processors offer uncompromising performance for single-socket systems while scaling dual-socket server performance to new heights, outperforming the competition at every price point. We are proud to bring choice and innovation back to the datacenter with the strong support of our global ecosystem partners.”

Each Epyc processor – of which there are nine different models – also offers eight memory channels supporting up to 2666MHz DDR4 DRAM, 2TB of memory and 128 PCIe lanes. 
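Taking the quoted eight channels of DDR4-2666 at face value, peak theoretical memory bandwidth works out as follows; this is a sketch assuming the standard 64-bit DDR4 channel width:

```python
# Back-of-envelope peak memory bandwidth for the Epyc figures quoted above:
# 8 channels of DDR4-2666, each channel 64 bits (8 bytes) wide per transfer.
channels = 8
transfers_per_sec = 2666e6   # 2666 MT/s
bytes_per_transfer = 8       # 64-bit channel width, assumed

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")  # roughly 170 GB/s per socket
```

Real-world sustained bandwidth will of course come in well under this ceiling.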

Server manufacturers have been quick to introduce products based on AMD Epyc 7000-series processors, including HPE, Dell, Asus, Gigabyte, Inventec, Lenovo, Sugon, Supermicro, Tyan, and Wistron, while the likes of Microsoft, Dropbox and Bloomberg also announced support for Epyc in the data centre. 

Monday’s launch marks the company’s first major foray back into servers and the data centre for almost a decade. The Opteron line of server microprocessors from AMD, first launched in 2003, found its way into an increasing number of the world’s top-100 most powerful supercomputers, peaking in 2010 and 2011 when 33 of the top 100 were powered by AMD Opteron.

Clearly feeling the heat, Intel has taken the bizarre approach of responding to AMD’s Epyc launch, saying that its rival’s approach could lead to “inconsistent performance”.

“We take all competitors seriously, and while AMD is trying to re-enter the server market segment, Intel continues to deliver 20+ years of uninterrupted data center innovations while maintaining broad ecosystem investments,” the firm said in a statement.

“Our Xeon CPU architecture is proven and battle tested, delivering outstanding performance on a wide range of workloads and specifically designed to maximise data centre performance, capabilities, reliability, and manageability. With our next-generation Xeon Scalable processors, we expect to continue offering the highest core and system performance versus AMD.

“AMD’s approach of stitching together 4 desktop die in a processor is expected to lead to inconsistent performance and other deployment complexities in the data centre.”

Courtesy-TheInq

Intel Dumps Edison And Galileo

June 29, 2017 by  
Filed under Computing

Intel has pulled the plug on its Galileo, Joule, and Edison development boards.

The chip maker quietly made the announcement and few people have actually noticed.

The company said that shipment of all Intel Galileo product SKUs ordered before the last order date will continue to be available from Intel until December 16, 2017.

Intel will discontinue manufacturing and selling all SKUs of the Intel Joule Compute Modules and Developer Kits – known as the Intel 500 Series compute modules in the People’s Republic of China.

Shipment of all Intel Joule products SKUs ordered before the last order date will continue to be available from Intel until December 16, 2017.

Last time orders for any Intel Joule products must be placed with Intel by September 16, 2017.

Intel will discontinue manufacturing and selling all SKUs of the Intel Edison compute modules and developer kits. Shipment of all Intel Edison product SKUs ordered before the last order date will continue to be available from Intel until December 16, 2017.

Last time orders for any Intel Edison products must be placed with Intel by September 16, 2017. All orders placed with Intel for Intel Edison products are non-cancellable and non-returnable after September 16, 2017.

The company has not explained the reasoning why the boards got the death penalty. Intel launched the Galileo, an Arduino-compatible minicomputer in 2013, the Edison in 2014, and the Joule last year.

The company touted the Galileo as part of Intel’s internet of things cunning plans for a while.

Courtesy-Fud

Will Xeon Scalable Processing Show Up In Intel Skylake-SP

June 27, 2017 by  
Filed under Computing

While Intel has not got around to releasing its Xeon Scalable Processors it is starting to provide a few more details about the Skylake-SP based microarchitecture.

Intel said that a new mesh interconnect architecture has been designed to increase bandwidth between on-chip elements, while simultaneously decreasing latency, and improving power efficiency and scalability.

Writing in his blog, Akhilesh Kumar, Skylake-SP CPU architect, said: “The task of adding more cores and interconnecting them to create a multi-core data center processor may sound simple, but the interconnects between CPU cores, memory hierarchy, and I/O subsystems provide critical pathways among these subsystems necessitating thoughtful architecture. These interconnects are like a well-designed highway with the right number of lanes and ramps at critical places to allow traffic to flow smoothly…”

In many-core Xeon processors, Intel used a ring interconnect architecture to link the CPU cores, cache, memory, and various I/O controllers on the chips. However, life has become more difficult as the number of cores in the processors, and memory and I/O bandwidth has increased.

Ring architecture requires data to be sent across long stretches to reach its intended destination. The new mesh architecture addresses this limitation by interconnecting on-chip elements better, which increases the number of pathways and improves efficiency.

Intel showed us this snap of the new mesh architecture.  

Processor cores, on-chip cache banks, memory controllers, and I/O controllers are organised in rows and columns. Wires and switches connect the various on-chip elements and provide a more direct path than the prior ring interconnect architecture. Mesh allows for many more pathways to be implemented, which further minimizes bottlenecks, and also allows Intel to operate the mesh at a lower frequency and voltage, yet still deliver high bandwidth and low latency.
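As a toy illustration of why a mesh shortens paths, here is a hedged Python sketch comparing average hop counts on a ring against a rows-and-columns grid. This is an abstract model, not Intel’s actual floorplan or routing:

```python
from itertools import product

def ring_avg_hops(n):
    # Average shortest-path distance between distinct nodes on a ring of n stops.
    dists = [min(abs(a - b), n - abs(a - b))
             for a, b in product(range(n), repeat=2) if a != b]
    return sum(dists) / len(dists)

def mesh_avg_hops(rows, cols):
    # Average Manhattan distance between distinct tiles in a 2D mesh grid.
    nodes = list(product(range(rows), range(cols)))
    dists = [abs(r1 - r2) + abs(c1 - c2)
             for (r1, c1), (r2, c2) in product(nodes, repeat=2)
             if (r1, c1) != (r2, c2)]
    return sum(dists) / len(dists)

# A 28-stop ring versus the same 28 elements arranged as a 4x7 grid.
print(f"ring: {ring_avg_hops(28):.2f} hops, mesh: {mesh_avg_hops(4, 7):.2f} hops")
```

Even in this crude model the grid roughly halves the average distance, which is the intuition behind Intel’s claimed latency and bandwidth gains.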

Kumar also says in the post: “The scalable and low-latency on-chip interconnect framework is also critical for the shared last level cache architecture. This large shared cache is valuable for complex multi-threaded server applications, such as databases, complex physical simulations, high-throughput networking applications, and for hosting multiple virtual machines. The negligible latency differences in accessing different cache banks allow software to treat the distributed cache banks as one large unified last level cache.”

Intel is also implementing a modular architecture with its Xeon Scalable processors for resources that access on-chip cache, memory, IO, and remote CPUs. These resources are distributed throughout the chip so “hot-spots” in areas that could be bottlenecked are minimized. Intel claims the higher level of modularity with the new architecture allows available resources to better scale as the number of processor cores increases.

Courtesy-Fud

Intel Kaby Lake With AMD GPU Expected Later This Year

June 26, 2017 by  
Filed under Computing

A few days ago, VideoCardz had an update on the engineering samples that got leaked.

One thing caught our attention: the Intel(R) HD Graphics Gen9 with 694C:C0 graphics. It took us some time to ask around, and it turns out that a Kaby Lake with AMD graphics combination might be the right lead.

We already told you that Apple is likely to be the customer who nicely asked Intel to make such a Frankenstein chip. What Apple wants, Apple gets, is the mantra, and it is very hard to say no to the hundreds of thousands of sales guaranteed by the Apple logo on any of their products.

It is clear that the platform has two separate GPUs, the Intel HD Graphics Gen9 and the 694C:C0. The latter is probably one of AMD’s GPUs, and it’s hard to tell whether this is an all-integrated solution sitting in the same package or a two-chip platform. We would go for the all-integrated solution, as it makes more sense and saves a lot of space and BOM cost.

SiSoft goes a step further, calling the platform “Intel Kaby lake Client platform Kaby lake Client System (Intel Kaby lake DDR4 RVP17)”. It is listed as a desktop platform too.

The GFX database is very certain that 694C:C0 is an AMD device. This is not by any means a confirmation, but it is the first time that we have seen an Intel Kaby Lake CPU matched with both Intel Gen9 graphics and an AMD GPU. If you are an investor looking for ways to make money, bear in mind that we are a news service and not any kind of advisory board. You do it at your own risk and leave us out of it.

We would expect this Kaby Lake with 694C:C0 solution to ship later this year. 

Courtesy-Fud

Is AMD’s Ryzen 1950X Ready To Hit The Market

June 26, 2017 by  
Filed under Computing

AMD’s Ryzen ThreadRipper 1950X CPU engineering sample, a 16-core/32-thread SKU, has been spotted on Geekbench running at 3.4GHz base clock.

This should be the flagship SKU and it appears it won’t have the 1998X model number, as previously rumored. The engineering sample works at 3.4GHz base clock and was running on an ASRock X399 Professional Gaming motherboard with 16GB of DDR4-2133 memory.

The ThreadRipper 1950X, as it is currently called, packs a massive 32MB of L3 cache and 8MB of L2 cache. Since this is an engineering sample, bear in mind that the performance figures are far from final: AMD will probably further optimize performance, and the sample was running with lower-clocked memory, with no details on whether it was a quad- or dual-channel setup.

According to the results posted on Geekbench and spotted by Wccftech.com, the ThreadRipper 1950X managed to get a 4,167 score in the single-thread benchmark and 24,539 points in multi-thread benchmark.

The CPU was compared to Intel’s Xeon E5-2697A v4, which is also a 16-core/32-thread CPU, based on the Broadwell architecture, and which scores 3,651 in single-thread and 30,450 points in multi-thread performance.

Courtesy-Fud

Will nVidia’s Next GeForce Go HBM2

June 22, 2017 by  
Filed under Computing

Volta is out for artificial intelligence and machine learning applications, and it will be shipping in DGX-1 systems, mainly for deep learning and AI. The next GeForce will be a completely separate chip.

Of course, Nvidia won’t jump straight to manufacturing a high-end GeForce card with 21 billion transistors – that would be the Volta that Nvidia CEO Jensen Huang launched back in May, and that would be both risky and expensive. One of the key reasons is that Nvidia doesn’t really have to push the technology’s possibilities, as the GP102-based 1080 Ti and Titan Xp look really good.

Our well-informed sources tell us that the next GeForce will not use HBM2 memory. It is too early for that, and HBM2 is still expensive – that is Nvidia’s view, of course, while AMD has been committed to making an HBM2 GPU, codenamed Vega, for more than a year now. Back with Maxwell, Nvidia committed to a better memory compression path and continued to do so with Pascal.

The next GeForce – its actual codename is still secret – will use GDDR5X memory as the best solution around. We can only speculate whether the card will even use the Volta architecture. The big chip that would replace the 1080 Ti could end up with the Gx104 codename. It is still too early for the rumored GDDR6, which will arrive next year at the earliest.
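To put rough numbers on the memory trade-off, here is a hedged sketch of the peak-bandwidth maths, assuming a 256-bit GDDR5X bus at 10Gb/s per pin against two 1024-bit HBM2 stacks at 2Gb/s per pin. These are illustrative figures, not confirmed specs for any of the cards above:

```python
def peak_bw_gb_s(bus_width_bits, gbps_per_pin):
    # Peak bandwidth = bus width (bits) * per-pin data rate / 8 bits per byte.
    return bus_width_bits * gbps_per_pin / 8

gddr5x = peak_bw_gb_s(256, 10)       # GTX 1080-style 256-bit bus, assumed
hbm2   = peak_bw_gb_s(2 * 1024, 2)   # two HBM2 stacks at 2Gb/s/pin, assumed
print(f"GDDR5X: {gddr5x:.0f} GB/s, HBM2: {hbm2:.0f} GB/s")
```

HBM2 wins on raw bandwidth from a much wider, slower bus; the catch, as the article notes, is cost.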

All eyes are on AMD, as we still have to see the Vega 10 launching. At the last financial analyst conference call, the company committed to launch the HBM 2 based Vega GPU at Siggraph. This year, Siggraph takes place between July 30 and August 3.

AMD’s lack of a higher-end card doesn’t really help its financial cause, as you need high-margin cards to improve your overall profits. The fact that the Radeon RX 570 and 580 are selling to miners definitely helps the RTG. The Radeon Technologies Group is selling all it can make, and this is a good place to be. The delay of Vega is not so appealing but, again, if this card ends up being a good miners’ card, gamers might have a hard time getting hold of one at all.

Courtesy-Fud

Is Intel’s 10nm Cannon Lake On Schedule

June 21, 2017 by  
Filed under Computing

Intel has officially announced that its 9th-generation Core Cannon Lake CPUs, based on the 10nm manufacturing process, are on track, and that its Ice Lake architecture has taped out.

According to a post on Intel’s official Twitter account, the company has reached another milestone for its 10nm manufacturing process, with both the Cannon Lake CPUs and the 2nd-generation Ice Lake CPUs on track.

Despite its confidence in the 10nm manufacturing process, which is only well founded if the yields are good, and upcoming architectures based on that manufacturing process, Intel is obviously feeling pressure from AMD and still needs to launch its 14nm Coffee Lake CPUs.

As you already know, Intel’s Coffee Lake architecture has been rumored to bring the first 6-core/12-thread CPU to the mainstream consumer lineup. The upcoming 10nm Cannon Lake architecture should be a simple die shrink of the Coffee Lake.

According to earlier rumors, Intel is expected to launch its Coffee Lake CPUs in August.

Courtesy-Fud

Is the FinFET Market Ripe for Growth?

June 16, 2017 by  
Filed under Computing

The global FinFET-technology market is expected to grow by 41.89 per cent during 2017-2021, according to a new report.

Beancounters at Research and Markets have added up some numbers, divided by their shoe size, and dashed out a report with the racy title “Global FinFET Technology Market 2017-2021”.

The report considers the sales of FinFET technology process node in different sizes across applications. It covers the market landscape and its growth prospects over the coming years. The report also includes a discussion of the key vendors operating in this market.

For those who came in late, FinFET is a 3D transistor and is integral to the design and development of processors. FinFET technology is a nonplanar, double-gate transistor built on a silicon-on-insulator substrate. As a 3D structure, a FinFET has lower parasitic resistance and capacitance than a planar structure, and FinFETs allow better device optimisation in comparison with planar technology.

One trend in the market is innovation in channel materials for development of 10nm and beyond FinFET chips. The 14nm FinFET-based chips use silicon channels that are not stable beyond this scale. With the 10nm technology, SiGe-based FinFET technology demonstrated enhanced performance, providing elegant solutions for CMOS technology.

According to the report, one driver in the market is strategic collaborations and M&A.

The strategic collaborations between the top players in the market are driving the global FinFET technology market. Strategic collaborations and M&A allow vendors to gain access to new technologies. This enables vendors to develop the ecosystem and design novel products with innovative technologies.

The report states that one challenge in the market is fluctuations in foreign currency exchange rates, which have a huge impact on the revenue realized by companies.

Vendors in the global FinFET technology market have their presence in several countries. Therefore, the fluctuations in the exchange rate do affect not only the selling price of the product but also the costs and expenses of the company and its foreign subsidiaries.

Courtesy-Fud

Could AMD’s Threadripper Undercut Intel’s 7900X

June 15, 2017 by  
Filed under Computing

According to a fresh report, AMD’s entry-level 16-core Threadripper CPU could cost as little as US $849.

The report, which comes from eTeknix.com, claims that the entry-level 16-core/32-thread Threadripper SKU, also known as the Threadripper 1998 – which works at 3.2GHz base and 3.6GHz Turbo clocks, lacks the eXtended Frequency Range (XFR) feature and has a 155W TDP – could launch at US $849.

If this rumor turns out to be true, AMD will significantly hurt Intel as this Threadripper will end up cheaper than Intel’s 10-core 7900X, which has a US $999 price tag (tray 1KU).
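On a simple dollars-per-core basis, the rumored pricing breaks down like this, using only the figures quoted above:

```python
# Price-per-core comparison using the figures quoted in the article.
threadripper_price, threadripper_cores = 849, 16
i9_7900x_price, i9_7900x_cores = 999, 10

tr_per_core = threadripper_price / threadripper_cores
intel_per_core = i9_7900x_price / i9_7900x_cores
print(f"Threadripper: ${tr_per_core:.2f}/core vs 7900X: ${intel_per_core:.2f}/core")
```

Cost per core is a crude metric, since per-core performance differs, but it shows the scale of the undercut.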

Although it could end up being slower than Intel’s 10-core chip in some scenarios, like gaming, the sheer number of cores and threads it offers would make it a great CPU for some CPU intensive tasks.

Hopefully, AMD will manage to bring more competition to the CPU market, as that would both drive prices down and most likely bring better CPUs in the future.

Courtesy-Fud

Is Apple Behind The Intel and AMD CPU-GPU Teaming

June 13, 2017 by  
Filed under Computing

We have already mentioned the existence of an Intel-based CPU with Radeon graphics. Our sources are very confident that Apple is the company behind this order.

We called this deal licensing, which led to many false conclusions, but if Intel wants to use Radeon graphics, it implies that there is some deal/license/cross-license in place.

Apple doesn’t want to use Nvidia graphics in any of its products, and it currently only uses Radeon-based GPUs, in discrete form, for both its desktops and notebooks. It turns out that a fully integrated solution with an Intel CPU and Radeon graphics is something that Apple was very interested in.

The 13-inch MacBook Pro currently uses Intel Iris Graphics 540 or Intel Iris Graphics 550, and this is not really enough for high-end users. A Radeon core in the same thermal envelope will simply offer more.

The 15-inch MacBook Pro comes with a Radeon Pro 450 with 2GB of GDDR5 memory and Intel HD Graphics 530, or a Radeon Pro 455 with 2GB of GDDR5 memory and automatic graphics switching to Intel HD Graphics 530.

An industry veteran, once the CEO of ATI Technologies, explained to me back in 2007 why AMD needed to integrate the Radeon core on its CPU. If you have been with us for a while, you will know this became Fusion. It all comes down to the fact that an integrated solution is cheaper, and the thermal envelope goes down.

A notebook with an Intel CPU and integrated Radeon graphics will simply be more powerful and more performance-per-watt efficient than a notebook with a separate CPU and discrete Radeon GPU. More importantly, the price of the APU – an all-integrated CPU with a GPU – will be much lower than for two separate chips.

It also enables a smaller and cheaper motherboard. Apple is likely to be the first company to announce an Intel CPU powered with Radeon graphics, and Intel should pitch this project to other manufacturers too. We have heard that many people call this Kaby Lake-G, something that BenchLife mentioned in April this year.

Courtesy-Fud
