Ubuntu Goes 64-bit On ARM

April 18, 2014 by Michael  
Filed under Computing

Canonical has announced its latest milestone server release, Ubuntu 14.04 LTS.

The company, which is better known for its open source Ubuntu Linux desktop operating system, has been supplying a server flavor of Ubuntu since 2006, and it is used by the likes of Netflix and Snapchat.

Ubuntu 14.04 Long Term Support (LTS) claims to be the most interoperable Openstack implementation, designed to run across multiple environments using Icehouse, the latest iteration of Openstack.

Canonical product manager Mark Baker told The INQUIRER, “The days of denying Ubuntu are over, and the cloud is where we can make a difference.”

Although Canonical regularly issues incremental releases of Ubuntu, LTS releases such as this one represent landmarks for the operating system, and they only come about every two years. LTS releases are also supported for a full five years.

New in this Ubuntu 14.04 LTS release are Juju and Maas orchestration and automation tools and support for hyperscale ARM 64-bit computing such as the server setup recently announced by AMD.

Baker continued, “We’re not an enterprise vendor in the traditional sense. We’ve got a pretty good idea of how to do it by now. Openstack is gaining a more formal status as enterprise evolves to adopt cloud based solutions, and we are making a commitment to support it.

“Openstack Icehouse is also considered LTS and as such will be supported for five years.”

Scalability is another key factor. Baker said, “We look at performance. For the majority of our customers it’s about efficiency – how rapidly we can scale up and scale in, and that’s something Ubuntu does incredibly well.”

Ubuntu 14.04 LTS will be available to download from Thursday.

Courtesy-TheInq

Red Hat Goes Atomic

April 17, 2014 by Michael  
Filed under Computing

The Red Hat Summit kicked off in San Francisco on Tuesday, and continued today with a raft of announcements.

Red Hat launched a new fork of Red Hat Enterprise Linux (RHEL) with the title “Atomic Host”. The new version is stripped down to enable lightweight deployment of software containers. Although the mainline edition also supports software containers, this lightweight version improves portability.

This is part of a wider Red Hat initiative, Project Atomic, which also sees the container platform Docker updated as part of the ongoing partnership between the two organisations.

Red Hat also announced a release candidate (RC) for Red Hat Enterprise Linux 7. The beta version has already been downloaded 10,000 times. The Atomic Host fork is included in the RC.

Topping all that is the news that Red Hat’s latest stable release, RHEL 6.5, has been deployed at the European Organisation for Nuclear Research – better known as CERN.

The European laboratory, which houses the Large Hadron Collider (LHC) and was the birthplace of the World Wide Web, has rolled out the latest versions of Red Hat Enterprise Linux, Red Hat Enterprise Virtualisation and Red Hat Technical Account Management. Although Red Hat has a long history with CERN, this has been a major rollout for the facility.

The logging server of the LHC is one of the areas covered by the rollout, as are the financial and human resources databases.

The infrastructure comprises a series of dual socket servers, virtualised on Dell Poweredge M610 servers with up to 256GB RAM per server and full redundancy to prevent the loss of mission critical data.

Niko Neufeld, deputy project leader at the Large Hadron Collider, said, “Our LHCb experiment requires a powerful, very reliable and highly available IT environment for controlling and monitoring our 70 million CHF detectors. Red Hat Enterprise Virtualization is at the core of our virtualized infrastructure and complies with our stringent requirements.”

Other news from the conference includes the launch of Openshift Marketplace, allowing customers to try solutions for cloud applications, and the release of Red Hat Jboss Fuse 6.1 and Red Hat Jboss A-MQ 6.1, which are standards based integration and messaging products designed to manage everything from cloud computing to the Internet of Things.

Courtesy-TheInq

Can AMD’s AM1 Challenge Intel’s Bay Trail?

April 11, 2014 by Michael  
Filed under Computing

AMD has released its first “system in a socket” accelerated processing unit (APU), which aims to reduce the cost of entry-level PCs.

Based on the firm’s Kabini system on chip (SoC), the APU is named the “AM1 Platform”, combining most system functions into one chip, with the motherboard and APU together costing between $39 and $59.

Announced at the beginning of March and released today in North America, AMD’s AM1 Platform is aimed at markets where entry-level PCs are competing against other low-cost devices.

“We’re seeing that the market for these lower-cost PCs is increasing,” said AMD desktop product marketing manager Adam Kozak. “We’re also seeing other devices out there trying to fill that gap, but there’s really a big difference between what these devices can do versus what a Windows PC can do.”

The AM1 Platform combines an Athlon or Sempron processor with a motherboard based on the FS1b upgradable socket design. These motherboards have no chipset, as all functions are integrated into the APU, and only require additional memory modules to make a working system.

The AM1 SoC has up to four Jaguar CPU cores and an AMD Graphics Core Next (GCN) GPU, an on-chip memory controller supporting up to 16GB of DDR3-1600 RAM, plus all the typical system input and output functions, including SATA ports for storage, USB 2.0 and USB 3.0 ports, as well as VGA and HDMI graphics outputs.

AMD’s Jaguar core is best known for powering both Microsoft’s Xbox One and Sony’s Playstation 4 (PS4) games consoles. The AM1 Platform supports Windows XP, Windows 7 and Windows 8.1 in 32-bit or 64-bit architectures.

AMD said that it is going after Intel’s Bay Trail with the AM1 Platform, and expects to see it in small form factor desktop PCs such as nettops and media-streaming boxes.

“We see it being used for basic computing, some light productivity and basic gaming, and really going after the Windows 8.1 environment with its four cores, which we’ll be able to offer for less,” Kozak added.

AMD benchmarked the AM1 Platform against an Intel Pentium J2850 with PC Mark 8 v2 and claimed it produced double the performance of the Intel processor. See the table below.

The FS1b upgradable socket means that users will be able to upgrade the system at a later date, while in Bay Trail and other low-cost platforms the processor is mounted directly to the motherboard.

AMD lifted the lid on its Kabini APU for tablets and mainstream laptops last May. AMD’s A series branded Kabini chips are quad-core processors, with the 15W A4-5000 and 25W A6-5200 clocked at 1.5GHz and 2GHz, respectively.

Courtesy-TheInq

Is AMD’s Graphics Push Paying Off?

April 9, 2014 by Michael  
Filed under Computing

It appears that AMD’s professional graphics push is finally starting to pay off.

AMD’s graphics business is chugging along nicely, thanks to the success of Hawaii-based high-end cards, solid sales of rebranded mainstream cards, plenty of positive Mantle buzz and of course the cryptocurrency mining craze, which is winding down.

However, AMD traditionally lags behind Nvidia in two particular market segments – mobile graphics and professional graphics. Nvidia still has a comfortable lead in both segments and its position in mobile is as strong as ever, as it scored the vast majority of Haswell design wins in 2013. However, AMD is fighting back in the professional market and it is slowly gaining ground.

Mac Pro buckets boost FirePro sales

Last year AMD told us on the sidelines of its Hawaii launch event that it has high hopes for its professional GPU line-up moving forward.

This was not exactly news. At the time it was clear that AMD GPUs would end up in Cupertino’s latest Mac Pro series. The question was how much AMD stands to gain, both in terms of market share and revenue.

Although we are not fans of Apple’s marketing hype and the hysteria associated with its consumerish fanboys, we have to admit that we have a soft spot for the new Mac Pro buckets. The bucket form factor is truly innovative and as usual the Mac Pro has the brains to match its looks. Basically it’s Apple going back to its roots.

Late last year it was reported that AMD would boost its market share in the professional segment to 30 percent this year, up from about 20 percent last year. For years Nvidia outsold AMD by a ratio of four to one in the professional space. The green team still has a huge lead, but AMD appears to be closing the gap.

It is hard to overstate the effect of professional graphics on Nvidia’s bottom line. The highly successful Quadro series always was and still is Nvidia’s cash cow. AMD is fighting back with competitive pricing and good hardware. In addition, the first Hawaii-based professional cards are rolling out as we speak. AMD’s new FirePro W9100, its first professional product based on Hawaii silicon, was announced a couple of weeks ago.

Can AMD keep it up?

2014 will be a good year for AMD’s professional graphics business, but it remains to be seen whether the winning streak will continue. Apple does not care about loyalty; it’s not exactly a monogamous hardware partner. Apple has a habit of shifting between Nvidia and AMD graphics in the consumer space, so we would not rule out Nvidia in the long run. It might be back in future Mac Pro designs, but AMD has a few things working in its favour.

One of them is Adobe’s love of OpenCL, which makes AMD’s professional offerings a bit more popular than Nvidia products in some circles. Adobe CC loves OpenCL and AMD has been collaborating with Adobe for years to improve it. Support now extends to SpeedGrade CC, After Effects CC, Premiere, Adobe Media Encoder CC and other Adobe products.

Pricing is another important factor, as AMD has a tradition of undercutting Nvidia in the professional segment. When you happen to control 20 percent of the market in a duopoly, competitive pricing is a must.

Also, changing vendors in the professional arena is a bit trickier than swapping out a consumer graphics card or mobile GPU in a Macbook. This is perhaps AMD’s biggest advantage at the moment. Maintaining such design wins is quite a bit easier than winning them. AMD learned this lesson the hard way. Nvidia did not have to, at least not yet.

According to Seeking Alpha, demand for Mac Pro buckets is “crazy-high” and delivery times range from five to six weeks. Seeking Alpha goes on to conclude that AMD could make about $800 million off a two-year Mac Pro design win, provided Apple sells 500,000 units over the next two years. At the moment it appears that Apple should have no trouble shipping half a million units, and then some.

If AMD manages to hold onto the Mac Pro deal, it stands to make a pretty penny over the next couple of years. However, if it also manages to seize more design wins in Apple consumer products, namely iMacs and Macbooks, AMD could make a small fortune on Cupertino deals alone.

Bear in mind that AMD’s revenue last year was $5.3 billion, so $800 million over the course of two years is a huge deal – even without consumer products in iMacs and Macbooks.

Courtesy-Fud

 

Can DirectX-12 Give Mobile Gaming A Boost?

March 31, 2014 by Michael  
Filed under Gaming

Microsoft announced DirectX 12 just a few days ago and for the first time Redmond’s API is relevant beyond the PC space. Some DirectX 12 tech will end up in phones and of course Windows tablets.

Qualcomm likes the idea, along with Nvidia. Qualcomm published a blog post on the potential impact of DirectX 12 on the mobile industry, and the takeaway is very positive indeed.

DirectX 12 equals less overhead, more battery life

Qualcomm says it has worked closely with Microsoft to optimise “Windows mobile operating systems” and make the most of Adreno graphics. The chipmaker points out that current Snapdragon chipsets already support DirectX 9.3 and DirectX 11.  However, the transition to DirectX 12 will make a huge difference.

“DirectX 12 will turbocharge gaming on Snapdragon enabled devices in many ways. Just a few years ago, our Snapdragon processors featured one CPU core, now most Snapdragon processors offer four. The new libraries and API’s in DirectX 12 make more efficient use of these multiple cores to deliver better performance,” Qualcomm said.

DirectX 12 will also allow the GPU to be used more efficiently, delivering superior performance per watt.

“That means games will look better and deliver longer gameplay on a single charge,” Qualcomm’s gaming and graphics director Jim Merrick added.

What about eye candy?

Any improvement in efficiency also tends to have a positive effect on overall quality. Developers can get more out of existing hardware; they will have more resources at their disposal, simple as that.

Qualcomm also points out that DirectX 12 is the first version to launch on Microsoft’s mobile operating systems at the same time as its desktop and console counterparts.

The company believes this emphasizes the growing shift toward mobile gaming and the consumer demand for it. The simultaneous launch will also make it easier to port desktop and console games to mobile platforms.

Of course, this does not mean that we’ll be able to play Titanfall on a Nokia Lumia, or that similarly demanding titles can be ported. However, it will speed up development and allow developers and publishers to recycle resources used in console and PC games. Since Windows Phone isn’t exactly the biggest mobile platform out there, this might be very helpful and it might attract more developers.

Courtesy-Fud

AMD, Intel and Nvidia Go All In For OpenGL

March 25, 2014 by Michael  
Filed under Computing

AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year’s Game Developers Conference (GDC).

Sharing a stage at the event in San Francisco, the three major chip designers explained how, with a little tuning, OpenGL can offer developers between seven and 15 times better performance, compared with the more widely cited increases of 1.3 times.

AMD manager of software development Graham Sellers, Intel graphics software engineer Tim Foley and Nvidia OpenGL engineer Cass Everitt and senior software engineer John McDonald presented their OpenGL techniques on real-world devices to demonstrate how these techniques are suitable for use across multiple platforms.

During the presentation, Intel’s Foley talked up three techniques that can help OpenGL increase performance and reduce driver overhead: persistent-mapped buffers for faster streaming of dynamic geometry, integrating MultiDrawIndirect (MDI) for faster submission of many draw calls, and packing 2D textures into arrays, so texture changes no longer break batches.

They also mentioned during their presentation that with proper implementations of these high-level OpenGL techniques, driver overhead could be reduced to almost zero. This is something that Nvidia’s software engineers have already claimed is impossible with Direct3D and only possible with OpenGL (see video below).

Nvidia’s VP of game content and technology, Ashu Rege, blogged his account of the GDC joint session on the Nvidia blog.

“The techniques presented apply to all major vendors and are suitable for use across multiple platforms,” Rege wrote.

“OpenGL can cut through the driver overhead that has been a frustrating reality for game developers since the beginning of the PC game industry. On desktop systems, driver overhead can decrease frame rate. On mobile devices, however, driver overhead is even more insidious, robbing both battery life and frame rate.”

The slides from the talk, entitled Approaching Zero Driver Overhead, are embedded below.

At the Game Developers Conference (GDC), Microsoft also unveiled the latest version of its graphics API, DirectX 12, with Direct3D 12 for more efficient gaming.

Showing off the new DirectX 12 API during a demo of Xbox One racing game Forza 5 running on a PC with an Nvidia Geforce Titan Black graphics card, Microsoft said DirectX 12 gives applications the ability to directly manage resources and perform synchronisation. As a result, developers of advanced applications can control the GPU to develop games that run more efficiently.

Courtesy-TheInq

Is Firmware A Security Threat?

March 20, 2014 by Michael  
Filed under Computing

Canonical’s Mark Shuttleworth wants everyone to abandon proprietary firmware code because it is a “threat vector.”

Writing in his blog, Shuttleworth said that manufacturers are too incompetent, and hackers too good for security-by-obscurity in firmware to ever work. Any firmware code running on your phone, tablet, PC, TV, wifi router, washing machine, server, or the server running the cloud your SaaS app is running on is a threat, he said.

“Arguing for ACPI on your next-generation device is arguing for a trojan horse of monumental proportions to be installed in your living room and in your data centre. I’ve been to Troy, there is not much left,” he moaned.

Shuttleworth wants the industry to use Linux and avoid firmware that has executable code. He writes: “Declarative firmware that describes hardware linkages and dependencies but doesn’t include executable code is the best chance we have of real bottom-up security.”

Courtesy-Fud

Nvidia Releases CUDA 6

March 10, 2014 by Michael  
Filed under Computing

Nvidia has made the latest release of its GPU programming platform, the CUDA 6 Release Candidate, available for developers to download for free.

The release arrives with several new features and improvements to make parallel programming “better, faster and easier” for developers creating next generation scientific, engineering, enterprise and other applications.

Nvidia has aggressively promoted its CUDA programming language as a way for developers to exploit the floating point performance of its GPUs. Available now, the CUDA 6 Release Candidate brings a major new update in unified memory access, which lets CUDA applications access CPU and GPU memory without the need to manually copy data from one to the other.

“This is a major time saver that simplifies the programming process, and makes it easier for programmers to add GPU acceleration in a wider range of applications,” Nvidia said in a blog post on Thursday.
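
As a sketch of what that looks like in practice (assuming a CUDA 6 toolchain and a GPU that supports unified memory), a single cudaMallocManaged allocation replaces the usual cudaMalloc plus cudaMemcpy pairs:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1024;
    float *data;
    // One allocation, visible to both CPU and GPU -- no explicit
    // cudaMemcpy in either direction.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU reads and writes
    cudaDeviceSynchronize(); // required before the CPU touches the data again
    printf("%f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

Before CUDA 6 the same program would need a separate host buffer and copy calls in both directions; with unified memory the runtime migrates the data between CPU and GPU on demand.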

There’s also the addition of “drop-in libraries”, which Nvidia said will accelerate applications by up to eight times.

“The new drop-in libraries can automatically accelerate your BLAS and FFTW calculations by simply replacing the existing CPU-only BLAS or FFTW library with the new, GPU-accelerated equivalent,” the chip designer added.
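
The drop-in library Nvidia ships for BLAS is NVBLAS, which intercepts standard BLAS level-3 calls and routes them to the GPU. A hedged sketch of how it is typically wired up; the file paths and the application name `my_blas_app` are illustrative and depend on your installation:

```shell
# nvblas.conf -- tells NVBLAS which CPU BLAS to fall back on
# (path is illustrative; point it at your actual BLAS library)
NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
NVBLAS_GPU_LIST ALL

# Run an existing, unmodified BLAS application with GPU acceleration
# by preloading the drop-in library -- no recompilation needed:
NVBLAS_CONFIG_FILE=./nvblas.conf \
LD_PRELOAD=/usr/local/cuda-6.0/lib64/libnvblas.so ./my_blas_app
```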

Multi-GPU Scaling has also been added to CUDA 6, introducing re-designed BLAS and FFT GPU libraries that automatically scale performance across up to eight GPUs in a single node. Nvidia said this provides over nine teraflops of double-precision performance per node, supporting larger workloads of up to 512GB in size, more than it supported before.

“In addition to the new features, the CUDA 6 platform offers a full suite of programming tools, GPU-accelerated math libraries, documentation and programming guides,” Nvidia said.

The previous CUDA 5.5 Release Candidate was issued last June, and added support for ARM based processors.

Aside from ARM support, Nvidia also improved Hyper-Q support in CUDA 5.5, which allowed developers to use MPI workload prioritisation. The firm also touted improved performance analysis and improved performance for cross-compilation on x86 processors.

Courtesy-TheInq

Is AMD Worried About Microsoft’s DirectX 12?

March 7, 2014 by Michael  
Filed under Computing

AMD’s Mantle has been a hot topic for quite some time and, despite its delayed birth, it has finally delivered performance gains in Battlefield 4. Microsoft is not sleeping; it has its own answer to Mantle, which we mentioned here.

Oddly enough we heard some industry people calling it DirectX 12 or DirectX Next but it looks like Microsoft is getting ready to finally update the next generation DirectX. From what we heard the next generation DirectX will fix some of the driver overhead problems that were addressed by Mantle, which is a good thing for the whole industry and of course gamers.

AMD got back to us officially stating that “AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle. “

AMD also told us that we can expect some information about this at the Game Developers Conference that starts on March 17th, or in less than two weeks from now.

We have a feeling that Microsoft is finally ready to talk about DirectX Next, DirectX 11.X, DirectX 12 or whatever they end up calling it, and we would not be surprised to see Nvidia’s 20nm Maxwell chips support this API, as well as future GPUs from AMD, possibly again 20nm parts.

Courtesy-Fud

Intel’s Broadwell NUC Expected In October

February 25, 2014 by Michael  
Filed under Uncategorized

Intel’s NUC is about to get its biggest overhaul yet later this year. The tiny barebones should get Broadwell-based Core i3 and Core i5 processors, but that’s not all.

It appears that Intel is planning to introduce completely redesigned boxes with plenty of new features. Codenamed Rock Canyon, the new NUC kits will feature miniHDMI and miniDP video outputs, allowing triple display support and 4K/UHD support.

In the storage department, Intel went for a standard 2.5-inch bay backed by an M.2 SSD. This means users will be able to use a small SSD as a system drive along with cheap mechanical storage. On the other hand, the M.2 form factor is anything but popular at this point. All new NUCs will feature USB 3.0 and in terms of connectivity they’ll have built-in WiFi and Bluetooth, IR sensor for HTPC remote controls and replaceable lids with NFC and Wireless Charging.

That’s not all though. Rock Canyon is the mainstream kit, but another one is on the way. Maple Canyon is a “professional” unit and it features Intel vPro technology and TPM hardware, but it does not have an IR sensor or lids with wireless charging.

So, while Broadwell probably won’t appear in the form of socketed desktop CPUs, you’ll still be able to buy a Broadwell desktop, albeit as a NUC.

Courtesy-Fud

Intel Releases New Xeon Chips

February 21, 2014 by Michael  
Filed under Computing

Intel has released details about its new Xeon E7 v2 processors. The Xeon processor E7 8800/4800/2800 v2 product family is designed to support up to 32-socket servers with configurations of up to 15 processing cores and up to 1.5 terabytes of memory per socket.

The chip is designed for the big data end of the Internet of Things movement, which the processor maker projects will grow to at least 30 billion devices by 2020. Beyond a claimed doubling of performance, Intel is promising a few other upgrades with the next generation of this data-focused family, including triple the memory capacity, four times the I/O bandwidth and the potential to reduce total cost of ownership by up to 80 percent.

The 15-core variants with the largest thermal envelope (155W) run at 2.8GHz with 37.5MB of cache and 8 GT/s QuickPath connectivity. The lowest-power models in the list have 105W TDPs and run at 2.3GHz with 24MB of cache and 7.2 GT/s of QuickPath bandwidth. There was also talk of 40W, 1.4GHz models at ISSCC but they have not been announced yet.

Intel has signed on nearly two dozen hardware partners to support the platform, including Asus, Cisco, Dell, EMC, and Lenovo. On the software end, Microsoft, SAP, Teradata, Splunk, and Pivotal also already support the new Xeon family. IBM and Oracle are among the few that support Xeon E7 v2 on both the hardware and software sides.

Courtesy-Fud

 

Intel Makes Gains In The GPU Arena

February 21, 2014 by Michael  
Filed under Computing

GPU shipments in the fourth quarter of 2013 were in the green. Shipments were up 2 percent year-on-year and 1.6 percent sequentially. However, AMD did not have a stellar quarter. According to Jon Peddie Research, AMD’s overall unit shipments were down 10.4 percent last quarter. Intel gained 5.1 percent, while Nvidia was up 3.4 percent.

The attach rate was 137 percent: 34 percent of all PCs sold in Q4 featured discrete graphics, while 66 percent relied solely on embedded graphics. The attach rate can exceed 100 percent because many PCs ship with both an integrated and a discrete GPU. The research firm pointed out that the overall PC market grew 1.8 percent quarter-on-quarter, but it was still down 8.5 percent compared to a year ago.

“The one bright spot in the PC market has been the growth of gaming PCs, where discrete GPUs play a significant role. The CAGR for total PC graphics from 2013 to 2017 is -1.3%. In 2013, 446 million GPUs were shipped, and the forecast for 2017 is 422 million,” Jon Peddie Research said.

AMD’s shipments of desktop APUs were up 15 percent sequentially, but they dropped 26.7 percent in notebooks. AMD’s discrete desktop shipments increased 1.8 percent, while discrete notebook shipments were down 6.7 percent. Overall AMD’s PC graphics shipments were down 10.4 percent.

“Notebook build cycles are specific, and AMD was late with its new parts,” the researchers pointed out.

Nvidia’s desktop shipments were up 3.6 percent quarter-on-quarter and its notebook discrete shipments increased 3.2 percent. Overall Nvidia’s PC GPU shipments were up 3.4 percent.

Courtesy-Fud

 

Is AMD’s Mantle Good For Games?

February 20, 2014 by Michael  
Filed under Gaming

Oxide Games’ Dan Baker is getting all excited about Mantle in the upcoming game Star Swarm. He told Maximum PC that Mantle isn’t just a low-level API that’s close to the metal; compared to DirectX, Mantle sits lower in the overall software stack.

Baker said that Mantle still abstracts the details of the shader cores themselves, so that it is not clear if it is running on a vector machine or a scalar machine. However, what isn’t abstracted is the basic way a GPU operates, he said: the GPU is another processor, just like any other, that reads and writes memory. “One thing that has happened is that GPUs are now general in terms of functionality. They can read memory anywhere. They can write memory anywhere.”

Mantle puts the responsibility onto the developer. Some feel that is too much, but this really is not any different from managing multiple CPUs on a system, which Oxide has gotten good at. Oxide does not program multiple CPUs with an API; it just does it itself. Mantle gives the team a similar capability for the GPU, he said. When asked about the performance in Star Swarm, Baker indicated that it will depend on how exploitative you are and on the specifics of the engine. In the case of Star Swarm, the team was limited in what it could do by driver overhead problems, and decisions were made to trade GPU performance for CPU.

Baker said that the Direct3D performance for the game is absolutely outstanding. “We have spent a huge amount of time optimising around D3D, and are biased in D3D’s favor. Mantle, on the other hand, we’ve spent far less time with and currently have only pretty basic optimizations. But Mantle is such an elegant API that it still dwarfs our D3D performance,” Baker said.

Courtesy-Fud

Ubuntu Cross-Platform Convergence Delayed

February 17, 2014 by Michael  
Filed under Computing

Ubuntu will not offer cross-platform apps as soon as it had hoped.

Canonical had raised hopes that its plan for Ubuntu to span PCs and mobile devices would be realised with the upcoming Ubuntu 14.04 release, providing a write-once, run-on-many template similar to that planned by Google for its Chrome OS and Android app convergence.

This is already possible on paper and the infrastructure is in place on smartphone and tablet versions of Ubuntu through its new Unity 8 user interface.

However, Canonical has decided to postpone the rollout of Unity 8 for desktop machines, citing security concerns, and it will now not appear, along with the Mir display server, until this coming autumn.

This will apply only to apps in the Ubuntu store, and in the true spirit of open source, anyone choosing to step outside that ecosystem will be able to test the converged Ubuntu before then.

Ubuntu community manager Jono Bacon told Ars Technica, “We don’t plan on shipping apps in the new converged store on the desktop until Unity 8 and Mir lands.

“The reason is that we use app insulation to (a) run apps securely and (b) not require manual reviews (so we can speed up the time to get apps in the store). With our plan to move to Mir, our app insulation doesn’t currently insulate against X apps sniffing events in other X apps. As such, while Ubuntu SDK apps in click packages will run on today’s Unity 7 desktop, we don’t want to make them readily available to users until we ship Mir and have this final security consideration in place.

“Now, if a core-dev or motu wants to manually review an Ubuntu SDK app and ship it in the normal main/universe archives, the security concern is then taken care of with a manual review, but we are not recommending this workflow due to the strain of manual reviews.”

As well as the aforementioned security issues, there are still concerns that cross-platform apps don’t look quite as good on the desktop as native desktop versions, and the intervening six months will be used to polish the user experience.

Getting the holistic experience right is essential for Ubuntu in order to attract OEMs to the converged operating system. Attempts to crowdfund its own Ubuntu handset fell short of its ambitious $20m target, despite raising $10.2 million, the single largest crowdfunding total to date.

Courtesy-TheInq

 

Samsung Joins OpenPower Consortium

February 14, 2014 by Michael  
Filed under Around The Net

Samsung has joined Google, Mellanox, Nvidia and other tech companies as part of IBM’s OpenPower Consortium. The OpenPower Consortium is working toward giving developers access to an expanded and open set of server technologies to improve data centre hardware using chip designs based on the IBM Power architecture.

Last summer, IBM announced the formation of the consortium, following its decision to license the Power architecture. The OpenPower Foundation, the actual entity behind the consortium, opened up the Power architecture technology, including specs, firmware and software under a license. Firmware is offered as open source. Originally, OpenPower was the brand of a range of System p servers from IBM that utilized the Power5 CPU. Samsung’s products currently utilize both x86 and ARM-based processors.

The intention of the consortium is to develop advanced servers, networking, storage and GPU-acceleration technology for new products. The four priority technical areas for development are system software, application software, open server development platform and hardware architecture. Along with its announcement of Samsung’s membership, the organization said that Gordon MacKean, Google’s engineering director of the platforms group, will now become chairman of the group. Nvidia has said it will use its graphics processors on Power-based hardware, and Tyan will be releasing a Power-based server, the first one outside IBM.

Courtesy-Fud