Chat Tool Slack Admits To Being Hacked

March 31, 2015 by mphillips  
Filed under Around The Net

The popular group chat tool Slack had its central database hacked in February, according to the company, potentially compromising users’ profile information like log-on data, email addresses and phone numbers.

The database also holds any additional information users may have added to their profiles like their Skype IDs.

The passwords were encrypted using a hashing technique. There was no indication the hackers were able to decrypt the passwords, Slack Technologies said in a blog post. No financial or payment information was accessed or compromised, it said.
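
The article doesn’t spell out the scheme, but the standard approach is a slow, salted one-way hash, so a database thief cannot simply read passwords back out. A minimal sketch in Python using the standard library’s PBKDF2 (Slack has said it uses bcrypt; the salt size and iteration count here are illustrative assumptions):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Derive a one-way hash from a password with a random per-password salt."""
    salt = os.urandom(16)  # a unique salt per password defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-derive the hash from the supplied password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```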

The unauthorized access took place over about four days in February. The company said it has made changes to its infrastructure to prevent future incidents.

Slack was contacting a “very small number” of individual users who had suspicious activity tied to their accounts, or whose messages may have been accessed. Slack did not say how many users it thinks may have been affected in this way. A company spokeswoman declined to comment further.

There’s been strong interest in Slack’s business chat app since it launched last year, and its user base now tops 500,000.

To beef up security, Slack added a two-factor authentication feature on Friday. If it’s enabled, users must enter a verification code in addition to their normal password whenever they sign in to Slack. The company recommends that all users turn it on.
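
Codes of this kind are typically time-based one-time passwords (TOTP, RFC 6238), computed from a secret shared between the service and an authenticator app. A minimal sketch of the standard algorithm; this illustrates how such codes work in general, not Slack’s confirmed implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step   # current 30-second window
    msg = struct.pack(">Q", counter)     # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six digits that change every 30 seconds
```

The server runs the same computation with the same shared secret; a login succeeds only if the submitted code matches the current window (implementations usually also accept adjacent windows to tolerate clock drift).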

Slack has also released a password kill-switch feature to let team owners and administrators reset passwords for an entire team at once. Otherwise, users can reset their passwords in their profile settings.

 

 

Amazon Unleashes Unlimited Storage For $5 A Month

March 30, 2015 by mphillips  
Filed under Around The Net

Amazon upped the ante: unlimited cloud storage for individuals for $59.99 per year, or about $5 a month.

Amazon’s Unlimited Everything Plan allows users to store an infinite number of photos, videos, files, documents, movies and music in its Cloud Drive.

The site also announced a separate $12 per year plan for unlimited photos. People who subscribe to Amazon Prime already get unlimited capacity for photos. Both the Unlimited Everything Plan and the Photos Plan have three-month free trial periods.

Online storage and file sharing service providers, such as Google Drive, Dropbox, and iCloud, have been engaged in a pricing war over the past year. Last fall, Dropbox dropped its Pro plan pricing for individuals to $9.99 per month for 1TB of capacity. Dropbox offers 2GB of capacity for free.

Dropbox also offers members 500MB of storage each time they get a friend to sign up; there’s a 16GB max on referrals, though. With Dropbox Pro, members can get 1GB instead of 500MB each time they refer someone.

Google Drive offers 15GB of capacity for free and charges $1.99 per month for 100GB and $9.99 per month for 1TB.

Apple’s iCloud offers 5GB of capacity for free, and charges 99 cents per month for 20GB, $3.99 per month for 200GB and $9.99 per month for 1TB.

Microsoft’s OneDrive offers 15GB of capacity for free, and charges $1.99 per month for 100GB, $3.99 per month for 200GB and $6.99 per month for 1TB.
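
Ranking the paid tiers by monthly cost per gigabyte makes the comparison concrete; a quick sketch using the figures quoted above (the unlimited plan’s per-gigabyte cost simply tends toward zero as you store more):

```python
# (plan, capacity in GB, monthly price in USD) -- figures as quoted in this article
plans = [
    ("Amazon Unlimited Everything", float("inf"), 59.99 / 12),
    ("Dropbox Pro 1TB",             1000, 9.99),
    ("Google Drive 100GB",           100, 1.99),
    ("Google Drive 1TB",            1000, 9.99),
    ("iCloud 20GB",                   20, 0.99),
    ("iCloud 200GB",                 200, 3.99),
    ("iCloud 1TB",                  1000, 9.99),
    ("OneDrive 100GB",               100, 1.99),
    ("OneDrive 200GB",               200, 3.99),
    ("OneDrive 1TB",                1000, 6.99),
]

for name, gb, usd in plans:
    per_gb = usd / gb  # 0.0 for the unlimited plan, since capacity is unbounded
    print(f"{name:28s} ${usd:5.2f}/mo  ~${per_gb:.4f}/GB")
```

By this measure OneDrive’s 1TB tier is the cheapest capped plan at roughly $0.007 per GB, while Amazon’s unlimited plan undercuts everything once you store much more than about 700GB.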

While Amazon offers unlimited file size uploads for desktop users, it limits file sizes to 2GB for mobile devices.

 

 

Will Intel Challenge nVidia In The GPU Space?

March 30, 2015 by Michael  
Filed under Computing

Intel has released details of its next-generation Xeon Phi processor, and it is starting to look like Intel is gunning for a chunk of Nvidia’s GPU market.

According to a briefing from Avinash Sodani, Knights Landing chief architect at Intel, a product update from Hugo Saleh, marketing director of Intel’s Technical Computing Group, an interactive technical Q&A, and a lab demo of Knights Landing running on an Intel reference-design system, Nvidia is squarely in Intel’s sights.

Knights Landing is leagues apart from prior Phi products and more flexible across a wider range of uses. Unlike more specialized processors, Intel describes Knights Landing as taking a “holistic approach” to new breakthrough applications.

Unlike the current-generation Phi design, which operates as a coprocessor, Knights Landing incorporates x86 cores and can directly boot and run standard operating systems and application code without recompilation.

The test system, which had socketed CPU and memory modules, was running a stock Linux distribution. A modified version of the Atom Silvermont x86 core forms the basis of a Knights Landing ’tile’, the chip’s basic design unit, which consists of dual x86 cores and vector execution units alongside cache memory and intra-tile mesh communication circuitry.

Each multi-chip package includes a processor with 30 or more tiles and eight high-speed memory chips.
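
The headline core count follows directly from that tile arithmetic; a back-of-the-envelope sketch (tile and core figures come from the briefing above, while the two vector units per core are an assumption for illustration, not a confirmed spec):

```python
tiles_per_package = 30      # "30 or more tiles" per package, per Intel's briefing
cores_per_tile = 2          # dual Silvermont-derived x86 cores per tile
vector_units_per_core = 2   # assumption for illustration only

cores = tiles_per_package * cores_per_tile
print(f"at least {cores} x86 cores, {cores * vector_units_per_core} vector units per package")
# -> at least 60 x86 cores, 120 vector units per package
```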

Intel said the on-package memory, totaling 16GB, is made by Micron with custom I/O circuitry and might be a variant of Micron’s announced but not-yet-shipping Hybrid Memory Cube.

The high-speed memory is similar to the GDDR5 devices used on GPUs like Nvidia’s Tesla.

It looks like Intel saw that Nvidia was making great leaps into the high performance arena with its GPU and thought “I’ll be having some of that.”

The internals of a GPU and Xeon Phi are different, but share common ideas.

Nvidia has a big head start. It has already announced the price and availability of a Titan X development box designed for researchers exploring GPU applications to deep learning. Intel has not done that yet for Knights Landing systems.

But Phi is also a hybrid that includes dozens of full-fledged 64-bit x86 cores. This could make it better at some parallelizable application categories that use vector calculations.

Courtesy-Fud

Are Free-To-Play Games Still In Their Infancy?

March 30, 2015 by Michael  
Filed under Gaming

During a presentation at the Game Developers Conference earlier this month, Boss Fight Entertainment’s Damion Schubert suggested the industry drop the term “whales,” calling it disrespectful to the heavy spenders who make the free-to-play business model possible. As an alternative, he proposed calling them “patrons,” since their largesse allows the masses to enjoy works that otherwise could not be made and maintained.

After his talk, Schubert spoke with GamesIndustry.biz about his own experiences with heavy spending customers. During his stint at BioWare Austin, Schubert was a lead designer on Star Wars: The Old Republic as it transitioned from its original subscription-based business model to a free-to-play format.

“I think the issue with whales is that most developers don’t actually psychologically get into the head of whales,” Schubert said. “And as a result, they don’t actually empathize with those players, because most developers aren’t the kind of person that would shell out $30,000 to get a cool speeder bike or whatnot… I think your average developer feels way more empathy for the free players and the light spenders than the whales because the whales are kind of exotic creatures if you think about them. They’re really unusual.”

Schubert said whales, at least those he saw on The Old Republic, don’t have uniform behavior patterns. They weren’t necessarily heavy raiders, or big into player-vs-player competition. They were just a different class of customer, with the only common attribute being that they apparently liked to spend money. Some free-to-play games have producers whose entire job is to try to understand those customers, Schubert said, setting up special message boards for that sub-community of players, or letting them vote on what content should be added to a game next.

“When you start working with these [customers], there’s a lot of concern that they are people who have gambling problems, or kids who have no idea of the concept of money,” Schubert said.

But from his experience on The Old Republic, Schubert came to understand that most of that heavy spending population is simply people who are legitimately rich and don’t have a problem with devoting money to something they see as a hobby. Schubert said The Old Republic team was particularly mindful of free-to-play abuse, and had spending limits in place to protect people from credit card fraud or kids racking up unauthorized charges. If someone wanted to be a heavy spender on the game, they had to call up customer service and specifically ask for those limits to be removed.

“If you think about it, they wanted to spend money so much that they were willing to endure what was probably a really annoying customer service call so they could spend money,” Schubert said.

The Old Republic’s transition from a subscription-based model to free-to-play followed a wider shift in the massively multiplayer online genre. Schubert expects many of the traditional PC and console gaming genres like fighting games and first-person shooters to follow suit, one at a time. That said, free-to-play is not the business model of the future. Not the only one, at least.

“I think the only constant in the industry is change,” Schubert said when asked if the current free-to-play model will eventually fall out of favor. “So yeah, it will shift. And it will always shift because people find a more effective billing model. And the thing to keep in mind is that a more effective billing model will come from customers finding something they like better… I think there is always someone waiting in the wings with a new way of how you monetize it. But I do think that anything we’re going to see in the short term, at least, is probably going to start with a great free experience. It’s just so hard to catch fire; there are too many competitive options that are free right now.”

Two upstart business models Schubert is not yet sold on are crowdfunding and alpha-funding. As a consumer, he has reservations about both.

“The Wild West right now is the Kickstarter stuff, which is a whole bunch of companies that are making their best guess about what they can do,” Schubert said. “Many of them are doing it very, very poorly, because it turns out project management in games is something the big boys don’t do very well, much less these guys making their first game and trying to do it on a shoestring budget. I think that’s a place where there’s a lot more caveat emptor going on.”

Schubert’s golden rule for anyone thinking of supporting a Kickstarter is to only pledge an amount of money you would be OK losing forever with nothing to show for it.

“At the end of the day, you’re investing on a hope and a dream, and by definition, a lot of those are just going to fail or stall,” Schubert said. “Game development is by definition R&D. Every single game that gets developed is trying to find a core game loop, trying to find the magic, trying to find the thing that will make it stand out from the 100 other games that are in that same genre. And a lot of them fail. You’ve played 1,000 crappy games. Teams didn’t set out to make crappy games; they just got there and they couldn’t find the ‘there’ there.”

He wasn’t much kinder to the idea of charging people for games still in an early stage of development.

“I’m not a huge fan of Early Access, although ironically, I think the MMO genre invented it,” Schubert said. “But on the MMOs, we needed it because there are things on an MMO that you cannot test without a population. You cannot test a 40-man raid internally. You cannot test large-scale political systems. You cannot test login servers with real problems from different countries, server load and things like that. Early Access actually started, in my opinion, with MMOs, with the brightest of hopes and completely and totally clean ideals.”

Schubert has funded a few projects in Early Access, but said he wound up getting unfinished games in return. Considering he works on unfinished games for a living, he doesn’t have much patience for them in his spare time, and has since refrained from supporting games in Early Access.

“I genuinely think there are very few people in either Kickstarter or Early Access that are trying to screw customers,” Schubert said. “I think people in both those spaces are doing it because they love games and want to be part of it, and it’s hard for me to find fault in that at the end of the day.”

Courtesy-GI.biz

Microsoft Confirms Windows 10 Will Support 8K Resolution

March 27, 2015 by Michael  
Filed under Computing

Software King of the World Microsoft’s Windows 10 operating system will support screen resolutions that will not be available on commercial displays for years.

At the WinHEC conference, Microsoft revealed that Windows 10 will support 8K (7680*4320) resolution for monitors, which is unlikely to show up on the market this year or next.

It also showed off minimum and maximum resolutions supported by its upcoming Windows 10. It looks like the new operating system will support 6″+ phone and tablet screens with up to 4K (3840*2160) resolution, 8″+ PC displays with up to 4K resolution and 27″+ monitors with 8K (7680*4320) resolution.
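
The raw pixel counts put those ceilings in perspective; a quick sketch (1080p is included as a familiar baseline and is not from Microsoft’s slide):

```python
resolutions = {
    "1080p":  (1920, 1080),   # common full-HD baseline
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.0f}x 1080p)")
# 8K UHD works out to about 33.2 million pixels: four times 4K and sixteen times 1080p
```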

To put this in some perspective, the boffins at NHK (Nippon Hōsō Kyōkai, Japan Broadcasting Corp.) think that the 8K ultra-high-definition television format will be the last 2D format, as 7680*4320 (and similar resolutions) is the highest 2D resolution that the human eye can process.

This means that 8K and similar resolutions will stay around for a long time and it makes sense to add their support to hardware and software.

NHK is already testing broadcasting in 8K ultra-high-definition resolutions, VESA has ratified DisplayPort and embedded DisplayPort standards to connect monitors with up to 8K resolution to graphics adapters, and a number of upcoming games will be equipped with textures for 8K UHD displays.

However, monitors that support 8K will not be around for some time because display makers will have to produce new types of panels for them.

Redmond will thus be ready for the advanced UHD monitors well before they hit the market; many have criticized Microsoft for poor support of 4K UHD resolutions in Windows 8.

Courtesy-Fud

 

Pwn2Own Researchers Able To Hack All Four Browsers

March 23, 2015 by mphillips  
Filed under Computing

Security researchers who participated in the Pwn2Own hacking contest have demonstrated remote code execution exploits against the top four browsers, and also hacked the widely used Adobe Reader and Flash Player plug-ins.

South Korean security researcher and serial browser hacker Jung Hoon Lee, known online as lokihardt, single-handedly popped Internet Explorer 11 and Google Chrome on Microsoft Windows, as well as Apple Safari on Mac OS X.

He walked away with US$225,000 in prize money, not including the value of the brand new laptops on which the exploits were demonstrated and which the winners get to take home.

The Pwn2Own contest takes place every year at the CanSecWest security conference in Vancouver, Canada, and is sponsored by Hewlett-Packard’s Zero Day Initiative program. The contest pits researchers against the latest 64-bit versions of the top four browsers in order to demonstrate Web-based attacks that can execute rogue code on underlying systems.

Lee’s attack against Google Chrome earned him the largest payout for a single exploit in the history of the competition: $75,000 for the Chrome bug, an extra $25,000 for a privilege escalation to SYSTEM and another $10,000 for also hitting the browser’s beta version — for a total of $110,000.

The IE11 exploit earned him an additional $65,000 and the Safari hack $50,000.

Lee’s accomplishment is particularly impressive because he competed alone, unlike other researchers who teamed up, HP’s security research team said in a blog post.

Also on Thursday, a researcher who uses the hacker handle ilxu1a popped Mozilla Firefox on Windows for a $15,000 prize. He also attempted a Chrome exploit, but ran out of time before he managed to get his attack code working.

Mozilla Firefox was also hacked, during the first day of the competition, by a researcher named Mariusz Mlynski. His exploit also leveraged a Windows flaw to gain SYSTEM privileges, earning him a $25,000 bonus on top of the standard $30,000 payout for the Firefox hack.

Most of the attacks demonstrated at Pwn2Own this year required chaining of several vulnerabilities together in order to bypass all defense mechanisms put in place in operating systems and browsers to prevent remote code execution.

The final count for vulnerabilities exploited this year stands as follows: five flaws in the Windows OS, four in Internet Explorer 11, three each in Mozilla Firefox, Adobe Reader, and Flash Player, two in Apple Safari and one in Google Chrome.

 

 

Will Microsoft Dump Its Xbox Division?

March 23, 2015 by Michael  
Filed under Gaming

Microsoft’s Xbox division is in a much healthier state today than it was a year ago. It’s had a tough time of it; forced to reinvent itself in an excruciating, public way as the original design philosophy and marketing message for the Xbox One transpired to be about as popular as breaking wind in a crowded lift, resulting in executive reshuffles and a tricky refocus of the variety that would ordinarily be carried out pre-launch and behind closed doors. Even now, Xbox One remains lumbered with the fossilised detritus of its abortive original vision; Kinect 2.0 has been shed, freeing up system resources and marking a clear departure for the console, but other legacy items like the expensive hardware required for HDMI input and TV processing are stuck right there in the system’s hardware and cannot be extracted until the inevitable redesign of the box rolls around.

All the same, under Phil Spencer’s tenure as Xbox boss, the console has achieved a better turnaround than any of us would have dared to expect – but that, perhaps, speaks to the low expectations everyone had. In truth, despite the sterling efforts of Spencer and his team, Xbox One is still a console in trouble. A great holiday sales season was widely reported, but actually only happened in one territory (the USA, home turf that was utterly dominated by Xbox in the previous generation), was largely predicated on a temporary price-cut and was somewhat marred by serious technical issues that dogged the console’s headline title for the season, the Master Chief Collection.

Since the start of 2015, things have settled down to a more familiar pattern once more; PS4 consistently outsells Xbox One, even in the USA, generally racking up more than double the sales of its competitor in global terms. Xbox One sells better month-on-month than the Wii U, but that’s cold comfort indeed given that Nintendo’s console is widely seen as an outright commercial failure, and Nintendo has all but confirmed that it will receive an early bath, with a replacement in the form of Nintendo NX set to be announced in 2016. Microsoft isn’t anywhere near that level of crisis, but nor are its sales in 2015 thus far outside the realms of comparison with Wii U – and their installed bases are nigh-on identical.

The odd thing about all of this, and the really positive thing that Microsoft and its collaborators like to focus on, is that while the Xbox One looks like it’s struggling, it’s actually doing markedly better than the Xbox 360 was at the same point in its lifespan – by my rough calculations, Xbox One is about 2.5 million units north of the installed base of Xbox 360 at the same point. Oddly, that makes it more comparable with PS3, which was, in spite of its controversy-dogged early years, a much faster seller out the door than Microsoft’s console. The point stands, though, that in simple commercial terms Xbox One is doing better than Xbox 360 did – it just happens that PS4 is doing better than any console has ever done, and casting a long shadow over Microsoft’s efforts in the process.

The problem with this is that I don’t think very many people are under the impression that Microsoft, whose primary businesses lie in the sale of office and enterprise software, cloud services and operating systems, is in the videogames business just in order to turn a little profit. Ever since the departure of Steve Ballmer and the appointment of the much more business-focused Satya Nadella as CEO, Xbox has looked increasingly out of place at Microsoft, especially as projects like Surface and Windows Phone have been de-emphasised. If Xbox still has an important role, it’s as the flag-bearer for Microsoft’s brand in the consumer space; but even at that, the “beach-head in the living room” is far less important now that Sony no longer really looks like a competitor to Microsoft, the two companies having streamlined themselves to a point where they don’t really focus on the same things any more. Besides, Xbox One is being left behind in PS4’s dust; even if Microsoft felt like it needed a beach-head in the living room, Xbox wouldn’t exactly be doing the job any more.

But wait, we’ve been here before, right? All those rumours about Microsoft talking to Amazon about unloading the Xbox division came to nothing only a few short months ago, after all. GDC saw all manner of talk about Xbox One’s place in the Windows 10 ecosystem; Spencer repeatedly mentioned the division having Nadella’s backing, and then there’s the recent acquisition of Minecraft, which surely seems like an odd thing to take place if the position of Xbox within the Microsoft family is still up in the air. Isn’t this all settled now?

Perhaps not, because the rumours just won’t stop swirling that Microsoft has quietly put Xbox on the market and is actively hunting for a buyer. During GDC and ever since, the question of who will come to own Xbox has been posed and speculated upon endlessly. The console’s interactions with Windows 10, including the eventual transition of its own internal OS to the Windows 10 kernel; the supposed backing of Nadella; the acquisition of Minecraft; none of these things have really deterred the talk that Microsoft doesn’t see Xbox as a core part of its business any more and would be happy to see it gone. The peculiar shake-up of the firm’s executive team recently, with Phil Harrison quietly departing and Kudo Tsunoda stepping up to share management of some of Microsoft Game Studios’ teams with Phil Spencer, has added fuel to the fire; if you hold it up at a certain angle to the light, this decision could look like it’s creating an internal dividing line that would make a possible divestment easier.

Could it happen? Well, yes, it could – if Microsoft is really determined to sell Xbox and can find a suitable bidder, it could all go far more smoothly than you may imagine. Xbox One would continue to be a part of the Windows 10 vision to some extent, and would probably get its upgrade to the Windows 10 kernel as well, but would no longer be Microsoft hardware – not an unfamiliar situation for a company whose existence has mostly been predicated on selling operating systems for other people’s hardware. Nobody would buy Xbox without getting Halo, Forza and various other titles into the bargain, but Microsoft’s newly rediscovered enthusiasm for Windows gaming would suggest a complex deal wherein certain franchises (probably including Minecraft) remain with Microsoft, while others went off with the Xbox division. HoloLens would remain a Microsoft project; it’s not an Xbox project right now and has never really been pushed as an Xbox One add-on, despite the immediate comparisons it prompted with Sony’s Morpheus. Xbox games would still keep working with the Azure cloud services (Microsoft will happily sell access to that to anyone, on any platform), on which framework Xbox Live would continue to operate. So yes, Xbox could be divorced from Microsoft, maintaining a close and amiable relationship with the requisite parts of the company while taking up residence in another firm’s stable – a firm with a business that’s much more in line with the objectives of Xbox than Microsoft now finds itself to be.

This, I think, is the stumbling block. I’m actually quite convinced that Microsoft would like to sell the Xbox division and has held exploratory talks to that end; I’m somewhat less convinced, but prepared to believe, that those talks are continuing even now. However, I’m struggling to imagine a buyer. None of Xbox’s rivals would be in the market to buy such a large division, and no game company would wish to lumber itself with a platform holder business. Neither Apple nor Google makes the slightest sense as a new home for Xbox either; the whole product is distinctly “un-Apple” in its ethos and approach, while Google is broadly wary of hardware and almost entirely uninterested in games.

Amazon was the previously mentioned suitor, and to my mind, remains the most likely purchaser – but it’s seemingly decided to pursue its own strategy for living room devices for now, albeit with quite limited success. I could see Amazon still “exploring options” in this regard with Microsoft, but if that deal was going to happen, I would have expected it to happen last year. Who else is out there, then? Netflix, perhaps, is an interesting outside possibility – the company’s branching out into creating original TV content as well as being a platform for third-party content would be a reasonably good cultural match for the Game Studios aspect of Xbox, but it’s hard to imagine a company that has worked so hard to divorce itself from the entire physical product market suddenly leaping back into it with a large, expensive piece of hardware.

This, I think, is what ultimately convinces me that Xbox is staying at Microsoft – for better or worse. It might be much better for Xbox if it was a centrepiece project for a company whose business objectives matched its strengths; but I don’t think any such company exists to take the division off Microsoft’s hands. Instead, Spencer and his talented team will have to fight to ensure that Xbox remains relevant and important within Microsoft. Building its recognition as a Windows 10 platform is a good start; figuring out other ways in which Xbox can continue to be a great game platform while also bringing value to the other things that Microsoft does is the next challenge. Having turned around public perception of the console to a remarkable degree, the next big task for the Xbox team will be to change perceptions within Microsoft itself and within the investor community – if Xbox is stuck at Microsoft for the long haul, it needs to carve itself a new niche within a business vision that isn’t really about the living room any more.

Courtesy-GI.biz

Microsoft Looking To Replace IE Browser Name

March 19, 2015 by mphillips  
Filed under Around The Net

Microsoft has confirmed that the new default browser in Windows 10 will not be named “Internet Explorer,” essentially marking the end of a 20-year reign by the once omnipresent product.

A new name was not disclosed, however.

“We’re right now researching what the new brand, or the new name, for our browser should be in Windows 10,” said Chris Capossela, Microsoft’s chief marketing officer, during a discussion of branding Monday at the firm’s Convergence conference. “We’ll continue to have Internet Explorer, but we’ll also have a new browser called Project Spartan, which is codenamed Project Spartan. And we have to name the thing.”

Microsoft has talked about Spartan before: In January, when the company touted Windows 10’s consumer-oriented features, it officially announced the new browser, dubbing it with the code name. Spartan, executives said then, would be the default Web browser for the new OS, although Internet Explorer will also be bundled with Windows 10, primarily for enterprise legacy requirements.

The clear implication was that Spartan would be tagged with a name different than “Internet Explorer,” or its shorthand, “IE.”

Capossela made that plain Monday when he talked about working up a new moniker.

According to people familiar with Microsoft’s plans, it will not reveal Spartan’s name until May, most likely at Ignite, the conference slated to run May 4-8 in Chicago. Ignite will roll up TechEd with several older, often-smaller meetings, including those that specialized in Exchange and SharePoint.

 

 

Will TSMC Win Apple’s A9 Business?

March 18, 2015 by Michael  
Filed under Computing

TSMC is reportedly getting the majority of Apple A9 orders, which would be a big coup for the company.

An Asian brokerage firm released a research note, claiming that disputes over the number of Apple A9 orders from TSMC and Samsung are “coming to an end.”

The unnamed brokerage firm said TSMC will gain more orders due to its superior yield-ramp and “manufacturing excellence in mass-production.”

This is not all, as the firm also claims TSMC managed to land orders for all Apple A9X chipsets, which will power next-generation iPads. Including the A9X, TSMC is expected to supply about 70 percent of all Apple A9-series chips, Focus Taiwan reports.

While Samsung managed to beat other mobile chipmakers (and TSMC), and roll out the first SoC manufactured on a FinFET node, TSMC is still in the game. The company is already churning out 16nm Kirin 930 processors for Huawei, and it’s about to get a sizable chunk of Apple’s business.

TSMC should have no trouble securing more customers for its 16FF process, which will be supplemented by the superior 16FF+ process soon. In addition, TSMC is almost certain to get a lot of business from Nvidia and AMD once their FinFET GPUs are ready.

Courtesy-Fud

Is Valve’s Steam Machine A Flop?

March 17, 2015 by Michael  
Filed under Gaming

There’s not a lot to argue with in the consensus view that Valve had the biggest and most exciting announcement of GDC this year, in the form of the Vive VR headset it’s producing with hardware partner HTC. It may not be the ultimate “winner” of the battle between VR technologies, but it’s done more than most to push the whole field forwards – and it clearly sparked the imaginations of both developers and media in San Francisco earlier this month. Few of those who attended GDC seem particularly keen to talk about anything other than Vive.

From Valve’s perspective, that might be just as well – the incredibly strong buzz around Vive meant that it eclipsed Valve’s other hardware-related announcement at GDC, the unveiling of new details of the Steam Machines initiative. Ordinarily, it might be an annoying (albeit very high-quality) problem to have one of your announcements completely dampen enthusiasm for the other; in this instance, it’s probably welcome, because what trickled out of GDC regarding Steam Machines is making this look like a very stunted, unloved and disappointing project indeed.

To recap briefly: Steam Machines is Valve’s attempt to create a range of attractive, small-form-factor PC hardware from top manufacturers carrying Valve’s seal of approval (hence being called “Steam Machines” and quite distinctly not “PCs”), running Valve’s own gaming-friendly flavour of the Linux OS, set up to connect to your living room TV and controlled with Valve’s custom joypad device. From a consumer standpoint, they’re Steam consoles; a way to access the enormous library of Steam content (at least the Linux-friendly parts of it) through a device that’s easy to buy, set up and control, and designed from the ground up for the living room.

That’s a really great idea, but one which requires careful execution. Most of all, if it’s going to work, it needs a fairly careful degree of control; Valve isn’t building the machines itself, but since it’s putting its seal of approval on them (allowing them to use the Steam trademark and promoting them through the Steam service), it ought to have the power to enforce various standards related to specification and performance, ensuring that buyers of Steam Machines get a clear, simple, transparent way to understand the calibre of machine they’re purchasing and the gaming performance they can expect as a result.

Since the announcement of the Steam Machines initiative, various ways of implementing this have been imagined; perhaps a numeric score assigned to each Machine allowing buyers to easily understand the price to performance ratio on offer? Perhaps a few distinct “levels” of Steam Machine, with some wiggle room for manufacturers to distinguish themselves, but essentially giving buyers a “Good – Better – Best” set of options that can be followed easily? Any such rating system could be tied in to the Steam store itself, so you could easily cross-reference and find out which system is most appropriate for the kind of games you actually want to play.
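
To make the idea concrete, such a score could be as crude as benchmark points per dollar bucketed into tiers. A purely hypothetical sketch: Valve has announced nothing of the sort, and every machine name, price, score and threshold below is invented for illustration:

```python
# Hypothetical rating scheme: synthetic benchmark score per dollar, bucketed
# into value tiers. All names and figures below are invented.
machines = [
    ("Alpha Box",   450,  3200),   # (name, price in USD, benchmark score)
    ("Beta Tower",  800,  7100),
    ("Gamma Rig",  1500, 11800),
]

def tier(points_per_dollar: float) -> str:
    if points_per_dollar >= 8.5:
        return "Best"
    if points_per_dollar >= 7.5:
        return "Better"
    return "Good"

for name, price, score in machines:
    ppd = score / price
    print(f"{name:10s} {ppd:5.2f} pts/$  -> {tier(ppd)}")
```

Any real scheme would need Valve to define and police the benchmark, which is precisely the kind of enforcement the article goes on to argue Valve has declined to do.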

In the final analysis, it would appear that Valve’s decision on the myriad possibilities available to it in this regard is the worst possible cop-out, from a consumer standpoint; the company’s decided to do absolutely none of them. The Steam Machines page launched on the Steam website during GDC lists 15 manufacturers building the boxes; many of those manufacturers are offering three models or more at different price and performance levels. There is absolutely no way to compare or even understand performance across the different Steam Machines on offer, short of cross-referencing the graphics cards, processors, memory types and capacities and drive types and capacities used in each one – and if you’ve got the up-to-date technical knowledge to accurately balance those specifications across a few dozen different machines and figure out which one is the best, then you’re quite blatantly going to be the sort of person who saves money by buying the components separately and wouldn’t buy a Steam Machine in a lifetime.

In short, unless there’s a pretty big rabbit that’s going to be pulled out of a hat between now and the launch of the first Steam Machines in the autumn, Valve seems to have copped out entirely from the idea of using its new systems to make the process of buying a gaming PC easier or more welcoming for consumers – and in the process, appears to have removed pretty much the entire raison d’etre of Steam Machines. The opportunity for the PC market to be grown significantly by becoming more “console-like” isn’t to do with shoving PC components into smaller boxes; that’s been happening for years, occasionally with pretty impressive results. Nor is it necessarily about reducing the price, which has also been happening for some years (and which was never going to happen with Steam Machines anyway, as Valve is of no mind to step in and become a loss-leading platform holder).

Rather, it’s about lowering the bar to entry, which remains dizzyingly high for PC gaming – not financially, but in knowledge terms. A combination of relatively high-end technical knowledge and of deliberate and cynical marketing-led obfuscation of technical terminology and product numbering has meant that the actual process of figuring out what you need to buy in order to play the games you want at a degree of quality that’s acceptable is no mean feat for an outsider wanting to engage (or re-engage) with PC games; it’s in this area, the simplicity and confidence of buying a system that you know will play all the games marketed for it, that consoles have an enormous advantage over the daunting task of becoming a PC gamer.

Lacking any guarantee of performance or simple way of understanding what sort of system you’re buying, the Steam Machines as they stand don’t do anything to make that process easier. Personally, I ought to be slap bang in the middle of the market for a Steam Machine; I’m a lapsed PC gamer with a decent disposable income who is really keen to engage with some of the games coming out in the coming year (especially some of the Kickstarted titles which hark back to RPGs I used to absolutely adore), but I’m totally out of touch with what the various specifications and numbers mean. A Steam Machine that I could buy with the confidence that it would play the games I want at decent quality would be a really easy purchase to justify; yet after an hour flicking over and back between the Steam Machines page launched during GDC and various tech websites (most of which assume a baseline of knowledge which, in my case, is a good seven or eight years out of date), I am no closer to understanding which machine I would need or what kind of price point is likely to be right for me. Balls to it; browser window full of tabs looking at tech spec mumbo-jumbo closed, PS4 booted up. Sale lost.

This would be merely a disappointment – a missed opportunity to lower the fence and let a lot more people enjoy PC gaming – were it not for the extra frisson of difficulty posed by none other than Valve’s more successful GDC announcement, the Vive VR headset. You see, one of the things that’s coming across really clearly from all the VR technology arriving on the market is that frame-rate – silky-smooth frame-rate, at least 60FPS and preferably more if the tech can manage it – is utterly vital to the VR experience, making the difference between a nauseating, headache-inducing mess and a Holodeck wet dream. Suddenly, the question of PC specifications has become even more important than before, because PCs incapable of delivering content of sufficient quality simply won’t work for VR. One of the appealing things about a Steam Machine ought to be the guarantee that I’ll be able to plug in a Vive headset and enjoy Valve’s VR, if not this year then at some point down the line; yet lacking any kind of certification that says “yes, this machine is going to be A-OK for VR experiences for now”, the risk of an expensive screw-up in the choice of machine to buy seems greater than ever before.
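
That frame-rate floor translates directly into a hard per-frame rendering budget, which is why an under-specified machine fails so visibly in VR; the arithmetic is unforgiving:

```python
for fps in (30, 60, 90, 120):
    budget_ms = 1000 / fps  # milliseconds available to render each frame
    print(f"{fps:3d} FPS -> {budget_ms:5.2f} ms per frame")
# 30 FPS -> 33.33 ms, 60 FPS -> 16.67 ms, 90 FPS -> 11.11 ms, 120 FPS -> 8.33 ms
```

Moving from a console-style 30FPS to a VR-comfortable 90FPS cuts the time available for each frame to a third, and a VR rig must render two slightly different views, one per eye, within that same budget.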

I may be giving Steam Machines a hard time unfairly; it may be that Valve is actually going to slap the manufacturers into line and impose a clear, transparent way of measuring and certifying performance on the devices, giving consumers confidence in their purchases and lowering the bar to entry to PC gaming. I hope so; this is something that only Valve is in a position to accomplish and that is more important than ever with VR on the horizon and approaching fast. The lack of any such system in the details announced thus far is bitterly disappointing, though. Without it, Steam Machines are nothing more than a handful of small form-factor PCs running a slightly off-kilter OS; of no interest to hobbyists, inaccessible to anyone else, and completely lacking a compelling reason to exist.

Courtesy-Gi.biz

Intel Shows Off The Xeon SoC

March 16, 2015 by Michael  
Filed under Computing

Intel has announced details of its first Xeon system on chip (SoC), which will form the new Xeon D 1500 processor family.

Although it is being touted as a chip for server, storage and compute applications at the “network edge”, word on the street is that it could be under the bonnet of robots during the next apocalypse.

The Xeon D SoCs use the more useful bits of the E3 and Atom SoCs along with 14nm Broadwell core architecture. The Xeon D chip is expected to bring 3.4x better performance per watt than previous Xeon chips.

Lisa Spelman, Intel’s general manager for the Data Centre Products Group, lifted the kimono on the eight-core 2GHz Xeon D 1540 and the four-core 2.2GHz Xeon D 1520, both running at 45W. The family also features integrated I/O and networking to slot into microservers and appliances for networking and storage, the firm said.

The chips are also being touted for industrial automation and may see life powering robots on factory floors. Since simple robots can run on basic, low-power processors, there’s no reason why faster chips can’t be plugged into advanced robots for more complex tasks, according to Intel.

Courtesy-Fud

Can Linux Ever Succeed On The Desktop?

March 16, 2015 by Michael  
Filed under Computing

Every three years I install Linux and see if it is ready for prime time yet, and every three years I am disappointed. What is so disappointing is not so much that the operating system is bad, it has never been; it is just that whoever designs it refuses to think of the user.

To be clear I will lay out the same rider I have for my other three reviews. I am a Windows user, but that is not out of choice. One of the reasons I keep checking out Linux is the hope that it will have fixed the basic problems in the intervening years. Fortunately for Microsoft it never has.

This time my main computer had a serious outage caused by a dodgy Corsair (which is now a c word) power supply and I have been out of action for the last two weeks. In the meantime I had to run everything on a clapped-out Fujitsu notebook which took 20 minutes to download a webpage.

One Ubuntu Linux install later it was behaving like a normal computer. This is where Linux has always been far better than Windows – making rubbish computers behave. I could settle down to work, right? Well, not really.

This is where Linux has consistently disqualified itself from prime-time every time I have used it. Going back through my reviews, I have been saying the same sort of stuff for years.

Coming from Windows 7, where a user can install and start work with no learning curve, Ubuntu is impossible; it can’t match that. There is a ton of stuff you have to download before you can get anything that passes for an ordinary service, and this downloading is far too tricky for anyone who is used to Windows.

It is not helped by the Ubuntu Software Center, which is supposed to make life easier for you. Say that you need to download a flash player. Adobe has a flash player you can download for Ubuntu. Click on it and Ubuntu asks you if you want to open this file with the Ubuntu Software Center to install it. You would think you would want this, right? The thing is that pressing yes opens the Software Center but does not download Adobe’s flash player. The center then says it can’t find the software on your machine.

Here is the problem which I wrote about nearly nine years ago – you can’t download Flash or anything proprietary because that would mean contaminating your machine with something that is not Open Sauce.

Sure, Ubuntu will download all those proprietary drivers, but you have to know to ask – an issue which has been around for so long now that it is silly. The issue of proprietary drivers is only a problem for those who are hard-core open saucers, and there are not enough of them to justify keeping an operating system in the dark ages for a decade. However, they have managed it.

I downloaded LibreOffice and all those other things needed to get a basic “windows experience” and discovered that all those typefaces you know and love are unavailable. They should have been in the proprietary pack but Ubuntu has a problem installing them. This means that I can’t share documents in any meaningful way with Windows users, because all my formatting is screwed.

LibreOffice is not bad, but it really is not Microsoft Word and anyone who tries to tell you otherwise is lying.

I downloaded and configured Thunderbird for mail and for a few good days it actually worked. However, yesterday it disappeared from the sidebar and I can’t find it anywhere. I am restricted to webmail and I am really hating Microsoft’s Outlook experience.

The only thing that is different between this review and the one I wrote three years ago is that there are now games which actually work, thanks to Steam. I have not tried this out yet because I am too stressed with the work backlog caused by having to work on Linux without regular software, but there is an element of feeling that Linux is at last moving to a point where it can be a little bit useful.

So what are the main problems that Linux refuses to address? Usability, interface and compatibility.

I know Ubuntu is famous for its shit interface, and Gnome is supposed to be better, but both look and feel dated. I also hate Windows 8’s interface, which requires you to use all your computing power to navigate a touch-screen tablet layout when you have neither a touch screen nor a tablet. It should have been an opportunity for open saucers to trump Windows with a nice interface – it wasn’t.

You would think that all the brains in the Linux community could come up with a simple, easy-to-use interface which lets you have access to all the files you need without much trouble. The problem here is that Linux fans like to tinker; they don’t want usability and they don’t have problems with command screens. Ordinary users, particularly more recent generations, will not go near a command screen.

Compatibility issues for games have been pretty much resolved, but other key software is missing and Linux operators do not seem keen to get it on board.

I do a lot of layout and graphics work. When you complain about not being able to use Photoshop, Linux fanboys proudly point to GIMP and say that it does the same things. You want to grab them by the throat and stuff their heads down the loo and flush. GIMP does less than a tenth of what Photoshop can do, and it does it very badly. There is nothing available on Linux that can do what CS or any real desktop publisher can do.

Proprietary software designed for real people using a desktop tends to trump anything open saucy, even when the open saucy option is a technical marvel.

So in all these years, Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product.

I will look forward to next week when the new PC arrives and I will not need another Ubuntu desktop experience. Who knows, maybe they will have sorted it in three years’ time.

Courtesy-Fud

 

Microsoft’s Cortana Headed To Android, Apple Devices

March 16, 2015 by mphillips  
Filed under Mobile

Microsoft is developing a modified version of Cortana, its competitor to Apple’s Siri, using research from an artificial intelligence project called “Einstein.”

Microsoft has been running its “personal assistant” Cortana on its Windows phones for a year, and will put the new version on the desktop with the arrival of Windows 10 this autumn. Later, Cortana will be available as a standalone app, usable on phones and tablets powered by Apple Inc’s iOS and Google Inc’s Android, people familiar with the project said.

“This kind of technology, which can read and understand email, will play a central role in the next roll out of Cortana, which we are working on now for the fall time frame,” said Eric Horvitz, managing director of Microsoft Research and a part of the Einstein project, in an interview at the company’s Redmond, Washington, headquarters. Horvitz and Microsoft declined comment on any plan to take Cortana beyond Windows.

The plan to put Cortana on machines running software from rivals such as Apple and Google, as well as the Einstein project, has not been reported previously. Cortana is the name of an artificial intelligence character in the video game series “Halo.”

They represent a new front in CEO Satya Nadella’s battle to sell Microsoft software on any device or platform, rather than trying to force customers to use Windows. Success on rivals’ platforms could create new markets and greater relevance for the company best known for its decades-old operating system.

The concept of ‘artificial intelligence’ is broad, and mobile phones and computers already show dexterity with spoken language and sifting through emails for data, for instance.

Still, Microsoft believes its work on speech recognition, search and machine learning will let it transform its digital assistant into the first intelligent ‘agent’ that anticipates users’ needs. By comparison, Siri is advertised mostly as responding to requests. Google’s mobile app, which doesn’t have a name like Siri or Cortana, already offers some limited predictive information ‘cards’ based on what it thinks the user wants to know.

 

Did Microsoft’s Stuxnet Patch Work?

March 12, 2015 by Michael  
Filed under Computing

Microsoft’s Stuxnet patch did not work properly and has left users open to the vulnerability for five years.

Microsoft today is expected to release a security bulletin, MS15-020, patching the vulnerability (CVE-2015-0096). It is unknown whether there have been public exploits of patched machines. The original LNK patch was released Aug. 2, 2010.

The .LNK vulnerability was targeted by Stuxnet as it tried to take apart Iran’s nuclear program. German researcher Michael Heerklotz in January reported the new findings to HP’s Zero Day Initiative.

LNK files define shortcuts to files or directories; Windows allows them to use custom icons from control panel files (.CPL). In Windows, ZDI said, those icons are loaded from modules, either executables or DLLs; CPLs are DLLs. An attacker can then define which executable module will be loaded, and use the .LNK file to execute arbitrary code inside the Windows shell.

Oddly, the vulnerability does not seem to have been exploited in the wild, although a Metasploit module has been available since 2010 and has been used in countless tests.

 

Courtesy-Fud

MediaTek To Go With AMD GPUs

March 11, 2015 by Michael  
Filed under Computing

One of the hottest things we learned at the Mobile World Congress is that MediaTek is working with AMD on mobile SoC graphics.

This is a big deal for both companies, as it means AMD is getting back into the ultra-low power graphics market, while MediaTek might finally get faster graphics and gain more appeal in the high-end segment. The choice of ARM Mali or Imagination Technologies GPUs is available to anyone, but as most of you know, Qualcomm has its own in-house Adreno graphics, while Nvidia uses ultra-low power Maxwell GPUs for its latest SoCs.

Since Nvidia exited the mobile phone business, it is now a two-horse race between the ever-dominant Qualcomm and the fast-growing MediaTek. The fact that MediaTek will get AMD graphics just adds fuel to the fire.

We have heard that key AMD graphics people are in continuous contact with MediaTek and that they have been working on an SoC graphics solution for a while.

MediaTek can definitely benefit from faster graphics. The recently pictured tablet SoC MT8173, powered by two Cortex-A72 cores clocked up to 2.4GHz and two Cortex-A53 cores, has PowerVR GX6250 graphics (two clusters). The most popular tablet chip, Apple’s A8X, has the octa-core PowerVR Series 6XT GXA6850, which should end up significantly faster, but at the same time significantly more expensive.

The MediaTek MT6795 is a 28nm eight-core part with a 2.2GHz clock and a PowerVR G6200 GPU at 700MHz, which is 100MHz faster than the one we tested on the Meizu MX4 – one of the fastest SoCs until Qualcomm’s Snapdragon 810 came out in late February.

AMD and MediaTek declined to comment on this upcoming partnership, but our industry sources know that they both have been working on new graphics for future chips that will be announced at a later date. It’s cool to see that AMD will return to this market, especially as the company sold off its Imageon graphics back in 2009 – for a lousy $65 million – to Qualcomm. Imageon, by ATI, was the foundation for Adreno graphics.

We were reassured some 18 months ago by some AMD senior graphics people that “AMD didn’t forget how to make good ultra-low power graphics”, and we guess that this cooperation proves it.

Courtesy-Fud