Scammers Trick iPhone Users Into Paying To Fix Non-existent Problem

March 30, 2017
Filed under Mobile

Apple has fixed a bug in the iOS version of Safari that had been used by criminals to trick phone owners into paying $125 or more because they assumed the browser was broken.

The flaw, fixed in Monday’s iOS 10.3 update, had been reported to Apple a month ago by researchers at San Francisco-based mobile security firm Lookout.

“One of our users alerted us to this campaign, and said he had lost control of Safari on his iPhone,” Andrew Blaich, a Lookout security researcher, said in a Tuesday interview. “He said, ‘I can’t use my browser anymore.'”

The criminal campaign, Blaich and two colleagues reported in a Monday post to Lookout’s blog, exploited a bug in how Safari displayed JavaScript pop-ups. When the browser reached a malicious site implanted with the attack code, it went into an endless loop of dialogs that refused to close no matter how many times “OK” was tapped. The result: Safari was unusable.
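The underlying pattern is easy to sketch. Below is a hypothetical, minimal reproduction of the dialog-loop technique Lookout described — not the campaign’s actual code — written as a page script. Because Safari handled JavaScript dialogs browser-wide before the fix, a loop like this locked up the entire app rather than a single tab:

```typescript
// Hypothetical sketch of the dialog-loop scareware pattern described
// above -- not the actual code used in this campaign.
function dialogLoop(): void {
  for (;;) {
    // confirm() blocks until the user taps a button; the loop then
    // immediately raises the next dialog, so tapping "OK" never helps.
    window.confirm("Your browser has been locked. Pay to unlock.");
  }
}
dialogLoop();
```

Clearing Safari’s history and website data, as described below, works because it discards the malicious page along with its looping script.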

At the same time, the attack displayed a message, purportedly from a law enforcement agency, demanding payment to unlock the browser; in at least one instance, the trigger was simply visiting a URL that suggested the site’s content was pornographic. Payment was to be made by texting a £100 ($125) iTunes gift card code to a designated number.

Blaich stressed that the attack was as much scam as scare: To regain control of Safari, all one had to do was head to Settings, tap Safari, then Clear History and Website Data.

“This was a scareware attack, where [the attackers] were trying to get people to not think and just pay,” said Blaich.

Scareware is a label applied to phony security software that claims a computer is heavily infected with malware. Such software nags users with pervasive pop-ups and fake alerts until they fork over the “registration” fee, sometimes in the hundreds of dollars.

In iOS 10.3, Apple re-engineered Safari so that it handles JavaScript pop-ups on a per-tab basis. iOS 10.3 also patched 84 security vulnerabilities.

“[The hackers] hoped you would just react, want to cover it up, then pay and move on,” Blaich said.

U.S. Commerce Department Removes ZTE From Trade Blacklist

March 30, 2017
Filed under Around The Net

The U.S. Department of Commerce has agreed to remove Chinese telecommunications equipment maker ZTE Corp from a trade blacklist after the company pleaded guilty to violating sanctions on Iran and agreed to pay nearly $900 million, the agency said in a notice.

Removal from the list marks the end of a tense period for ZTE, which faced trade restrictions that could have severed its ties to critical U.S. suppliers.

“By acknowledging the mistakes we made, taking responsibility for them … we are committed to a ZTE that is fully compliant, healthy and trustworthy,” ZTE Chief Executive Zhao Xianming said in an emailed statement.

Last year, the U.S. Commerce Department placed export restrictions on ZTE as punishment for violating U.S. sanctions against Iran. The restrictions would have prevented suppliers from providing ZTE with any U.S.-made equipment, potentially freezing the Chinese handset maker’s supply chain.

Over the past 12 months, as ZTE cooperated with U.S. authorities, the U.S. Commerce Department temporarily suspended the trade restrictions with a series of three-month reprieves, allowing the company to maintain ties to U.S. suppliers.

Earlier this month, ZTE agreed to pay a total of $892.4 million and pleaded guilty to violating U.S. sanctions by sending American-made technology to Iran and lying to investigators.

The Commerce Department said on Tuesday it would impose severe restrictions on former ZTE CEO Shi Lirong, whom the agency accused of approving efforts to skirt sanctions and ship equipment to Iran.

The Commerce Department said Shi approved a systematic, written business plan to use shell companies to secretly export U.S. technology to Iran. Reuters could not immediately reach Shi for comment.

The U.S. investigation followed reports by Reuters in 2012 that ZTE had signed contracts with Iran to ship millions of dollars’ worth of hardware and software from some of America’s best-known technology companies.

U.S. authorities have said the size of the financial penalty against ZTE also reflects the fact that the company lied to investigators when executives were approached about the allegations.

As part of the deal, ZTE will be under probation for three years and agreed to cooperate in the continuing investigation.

Can Violence In A Game Promote Safety?

March 30, 2017
Filed under Gaming

When the original Doom was released in 1993, its unprecedentedly realistic graphic violence fueled a moral panic among parents and educators. Over time, the game’s sprite-based gore has lost a bit of its impact, and that previous sentence likely sounds absurd.

Given what games have depicted in the nearly quarter century since Doom, that level of violence is no longer shocking so much as it is quaint, perhaps even endearing. So when it came time for id Software to reboot the series with last year’s critically acclaimed remake of Doom, one of the things the studio had to consider was exactly how violent it should be, and to what end.

Speaking with GamesIndustry.biz at the Game Developers Conference last month, the Doom reboot’s executive producer and game director Marty Stratton and creative director Hugo Martin acknowledged that the context of the first Doom’s violence had changed greatly over the years. And while the original’s violence may have been seen as horrific and shocking, they wanted the reboot to skew closer to cartoonishly entertaining or, as they put it, less Saw and more Evil Dead 2.

“We were going for smiles, not shrieks,” Martin said, adding, “What we found with violence is that more actually makes it safer, I guess, or just more acceptable. It pushes it more into the fun zone. Because if it’s a slow trickle of blood out of a slit wrist, that’s Saw. That’s a little bit unsettling, and sort of a different type of horror. If it’s a comical fountain of Hawaiian Punch-looking blood out of someone’s head that you just shot off, that’s comic book. That’s cartoonish, and that’s what we wanted.”

“They’re demons,” Stratton said. “We don’t kill a single human in all of Doom. No cursing, no nudity. No killing of humans. We’re actually a pretty tame game when you think about it. I’ve played a lot of games where you just slaughter massive amounts of human beings. I think if we had to make some of the decisions we make about violence and the animations we do and if we were doing them to humans, we would have completely different attitudes when we go into those discussions. It’s fun to sit down in a meeting and think about all the ways it would be cool to rip apart a pinky demon or an imp. But if we had the same discussions about, ‘How am I going to rip this person in half?’ or rip his arm off and beat him over the head with it, it takes on a different connotation that I don’t know would be as fun.”

That balancing act between horror and comedy paid off for the reboot, but it was by no means the only line last year’s Doom had to straddle. There was also the question of what a modern Doom game would look like. The first two Doom games were fast-paced shooters, while the third was a much slower horror-tinged game where players had to choose between holding a gun or a flashlight at the ready. Neither really fit into the recent mold of AAA shooters, and the developers knew different people would have very different expectations for a Doom game in 2016.

As Stratton explained, “At that point, we went to, ‘What do we want? What do we think a Doom game should be moving forward?’ As much as we always consider how the audience is going to react to the game–what they’re thinking, and what we think they want–back in the very beginning, it was, ‘What do we think Doom should be, and what elements of the game do we want to build the future of Doom on?’ And that’s really where we came back to Doom 1, Doom II, the action, the tone, the attitude, the personality, the character, the irreverence of it… those were all key words that we threw up on the board in those early days. And then mechanically, it was about the speed. It was about unbelievable guns, crazy demons, really being very honest about the fact that it was Doom. It was unapologetic early on, and we built from there.”

It helped that they had a recent example of how not to bring Doom into the current generation. Prior to the Doom reboot, id Software had been working on Doom 4, which Stratton said was a good game, but just didn’t feel like Doom. For one, it cast players as a member of a resistance army rather than a one-marine wrecking crew. It was also slower from a gameplay perspective, utilizing a cover-based system shared by numerous modern shooters designed to make the player feel vulnerable.

“None of us thought that the word ‘vulnerable’ belonged in a proper Doom game,” Martin said. “You should be the scariest thing in the level.”

Doom 4 wasn’t a complete write-off, however. The reboot’s glory kill system of over-the-top executions actually grew out of a Doom 4 feature, although Stratton said they made it “faster and snappier.”

Of course, not everything worked as well. At one point the team tried giving players a voice in their ears to help guide them through the game, a pretty standard first-person shooter device along the lines of Halo’s Cortana. Stratton said while the device works well for other franchises, it just didn’t feel right for Doom, so it was quickly scrapped.

“We didn’t force anything,” Stratton said. “If something didn’t feel like Doom, we got rid of it and tried something that would feel like Doom.”

That approach paid off well for the game’s single-player mode, but Stratton and Martin suggested they weren’t quite as thrilled with multiplayer. Both are proud of the multiplayer (which continues to be worked on) and confident they delivered a high-quality experience, but each had misgivings. Stratton said if he could change one thing, it would be to re-do the multiplayer progression system and give more enticing or better-placed “hooks” to keep players coming back for game after game. Martin wished the team had messaged what the multiplayer would be a little more clearly, saying too many players expected a pure arena shooter along the lines of Quake 3 Arena, when that was never the development team’s intent.

Those issues aside, it’s clear the pair feel the new wrinkles and changes they made to the classic Doom formula paid off more often than not.

“Lots worked,” Stratton said. “That’s probably the biggest point of pride for us. The game really connected with people. We always said we wanted to make something that was familiar to long-time fans, felt like Doom from a gameplay perspective and from a style and tone and attitude perspective. And I think we really accomplished that at a high level. And I think we made some new fans, which is always what you’re trying to do when you have a game that’s only had a few releases over the course of 25 years… You’re looking to bring new people into the genre, or into the brand, and I think we did that.”

Courtesy-GI.biz

Return Of The Samsung Galaxy Note 7?

March 29, 2017
Filed under Mobile

Tech giant Samsung Electronics Co Ltd has announced that it plans to offer refurbished versions of the Galaxy Note 7 smartphones, the model pulled from markets last year due to fire-prone batteries.

Samsung’s Note 7s were permanently scrapped in October following a global recall, roughly two months after the launch of the near-$900 devices, after some phones self-combusted. A subsequent probe found manufacturing problems in batteries supplied by two different companies – Samsung SDI Co Ltd and Amperex Technology Ltd.

Analysis from Samsung and independent researchers found no other problems in the Note 7 devices except the batteries, raising speculation that Samsung will recoup some of its losses by selling refurbished Note 7s.

A person familiar with the matter told Reuters in January that Samsung was considering the possibility of selling refurbished versions of the device or reusing some parts.

Samsung’s announcement that revamped Note 7s will go back on sale, however, surprised some with the timing – just days before it launches its new S8 smartphone on Wednesday in the United States, its first new premium phone since the debacle last year.

Samsung, under huge pressure to turn its image around after the burning battery scandal, had previously not commented on its plans for recovered phones.

“Regarding the Galaxy Note 7 devices as refurbished phones or rental phones, applicability is dependent upon consultations with regulatory authorities and carriers as well as due consideration of local demand,” Samsung said in a statement.

South Korea’s Electronic Times newspaper, citing unnamed sources, said on Tuesday Samsung will start selling refurbished Note 7s in its home country in July or August and will aim to sell between 400,000 and 500,000 of the Note 7s using safe batteries.

Samsung said in a statement to Reuters the company has not set specifics on refurbished Note 7 sales plans, including what markets and when they would go on sale, though noting the phones will not be sold in India as some media reported earlier this year.

The firm said refurbished Note 7s will be equipped with new batteries that have gone through Samsung’s new battery safety measures.

“The objective of introducing refurbished devices is solely to reduce and minimize any environmental impact,” it said.

Uber Calls It Quits In Another Market

March 29, 2017
Filed under Around The Net

Ride-hailing group Uber Technologies will discontinue offering services in Denmark next month due to a taxi law that puts into effect new requirements for drivers such as mandatory fare meters, the company said on Tuesday.

Uber has faced headwinds since its app went online in Denmark in 2014 as local taxi driver unions, companies and politicians complained that Uber posed unfair competition by not meeting legal standards required for established taxi firms.

Uber, which says about 2,000 Danish drivers and 300,000 riders use its app, said in a statement that it would shut down its services in Denmark on April 18 due to the new law.

Despite the minority liberal government’s ambitions to deregulate the taxi business and accommodate new operations like Uber, the taxi law presented in February introduced measures such as mandatory fare meters and seat sensors.

“For us to operate in Denmark again the proposed regulations need to change. We will continue to work with the government in the hope that they will update their proposed regulations and enable Danes to enjoy the benefits of modern technologies like Uber,” Uber said.

Two Danish Uber drivers were fined in November for violating taxi laws, and in December Uber’s European division was indicted by Danish public prosecutors on charges of assisting those drivers in violating taxi laws.

Uber said it would allocate resources to help Danish Uber drivers through the shutdown process.

Is JavaScript The Most Popular Language?

March 29, 2017
Filed under Computing

Beancounters at RedMonk have taken time out from their busy prayer wheels to create a list of the world’s most popular programming languages.

The list is based on data from both GitHub and Stack Overflow and the Red Monks have chanted a top 10 list for 2017.

1: JavaScript
2: Java
3: Python
4: PHP
5: (tie) C# and C++
7: (tie) Ruby and CSS
9: C
10: Objective-C
While there was little change in the top ten, there were a few shifts among the also-rans. This was mostly because GitHub data now counts the number of pull requests rather than the number of repositories.
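RedMonk’s published approach correlates each language’s rank by GitHub activity with its rank by Stack Overflow discussion. As an illustrative sketch only — RedMonk’s actual computation differs, and the figures below are invented — combining the two signals might look like this:

```typescript
// Illustrative two-source popularity ranking, loosely modeled on the
// approach described above; the input figures are invented.
interface LanguageStats {
  name: string;
  githubPullRequests: number; // activity signal (the new methodology)
  stackOverflowTags: number;  // discussion signal
}

// Rank languages by one signal: 1 = most popular.
function rankBy(
  stats: LanguageStats[],
  key: (s: LanguageStats) => number
): Map<string, number> {
  const sorted = [...stats].sort((a, b) => key(b) - key(a));
  return new Map(sorted.map((s, i) => [s.name, i + 1] as [string, number]));
}

// Sum the two ranks; a lower total means a higher overall placing.
function combinedRanking(stats: LanguageStats[]): string[] {
  const gh = rankBy(stats, s => s.githubPullRequests);
  const so = rankBy(stats, s => s.stackOverflowTags);
  return [...stats]
    .sort((a, b) =>
      gh.get(a.name)! + so.get(a.name)! - (gh.get(b.name)! + so.get(b.name)!))
    .map(s => s.name);
}

console.log(combinedRanking([
  { name: "JavaScript", githubPullRequests: 950, stackOverflowTags: 900 },
  { name: "Java",       githubPullRequests: 800, stackOverflowTags: 850 },
  { name: "Python",     githubPullRequests: 780, stackOverflowTags: 700 },
])); // ["JavaScript", "Java", "Python"]
```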

As a result, Swift was a major beneficiary of the new GitHub process, jumping eight spots from 24 to 16.

For those who came in late, Swift was supposed to be the Great White Hope, but the early hype has given way to scepticism. The language appears to be entering something of a trough of disillusionment, yet the Red Monks note that Swift has reached a Top 15 ranking faster than any other language they have tracked since the rankings began.

TypeScript also did well, moving up 17 spots, and PowerShell moved from 36 to 19.

One of the biggest overall gainers of any of the measured languages, Rust leaped from 47 to 26, one spot behind Visual Basic.

Courtesy-Fud

Can Microsoft Make Game Pass Profitable?

March 29, 2017
Filed under Gaming

Of all the various innovations we’ve seen in this console generation, it may be the business model changes that have the most lasting impact on the games industry. Though originally introduced in the back half of the previous generation, the notion of giving consumers “free” games on a monthly basis for continuing their subscription to console online services has become a standard part of the model in this hardware generation.

The degree to which this is expected, and to which the perceived quality of each month’s offerings is hotly debated, is a clear signal of how the value relationship between consumers and game software is changing. Now, within the next few months, both Microsoft and Sony will evolve that relationship even further, with services which aim to give consumers access to current-gen game software through a very different transaction model.

Microsoft was first out of the blocks with its announcement, revealing at the end of last month that a large library of software for the Xbox One will be made available for a $9.99 recurring monthly subscription. Sony’s version of the concept is similar in business terms, if dramatically different technologically; it’s going to start adding PS4 titles to PS Now, a game-streaming service which currently offers a huge library of PS3 games for a $20 recurring subscription (or $45 for three months, which gets it a little closer to Microsoft’s pricing).

The goal being pursued by both firms is fairly obvious; paying monthly rather than buying titles outright is the model which has become dominant for both music and video, so it stands to reason that games will follow down the same path, at least to some extent. There’s certainly some appeal to the idea of a “Netflix / Spotify For Games”. From a business perspective, getting $120 (or $180) from consumers in flat monthly fees for games probably represents a revenue boost if the service is primarily picked up by the kind of consumers who don’t buy a lot of new games – either predominantly buying pre-owned, waiting for titles to hit bargain-basement prices, or borrowing games from friends, for example.

On the other hand, there’s an abundance of consumers out there who buy far, far more than the two new games a year that you’d get for that $120 fee – so any of those who stop buying new games in favour of a subscription service will represent a major revenue loss to the industry. Many people will be worried about that possibility, no doubt, but the reality is that there’s plenty of precedent to suggest that a subscription service won’t harm sales of new games.

New titles won’t go directly onto a subscription service; there’ll undoubtedly be a lengthy exclusivity period for people who pay for a physical or digital copy of the game, with titles only appearing for subscribers once their revenue potential in direct sales is already all but exhausted. Subscription revenue therefore becomes a second bite at the cherry – a way of boosting the industry’s often rather ratty-looking “long tail”.

From a consumer perspective, that’s actually not all that different from the way things are now. If you’re not bothered about playing a game in its first few months on the market, then you’re probably going to end up buying a second-hand copy – or getting it from the bargain bin, or borrowing it from a friend, or perhaps even just waiting for it to pop up on PlayStation Plus at some point.

Game software generally loses value dramatically after the first few months on the market; lots of options exist for picking it up cheap, but decades of experience shows that this doesn’t dissuade fans from buying new games they really care about. Games are a “zeitgeisty” medium; people want to be playing the game everyone else is playing right now (as anyone who’s had to put up with their social media feeds being filled to the brim with Zelda chat while every electronics store in the city remains out of stock of Switch can tell you – not that I’m bitter, of course).

For the industry, however, most of these options aren’t very appealing. Second-hand software sales enrich GameStop, and just about nobody else; there’s an argument that second-hand sales boost new software sales by providing trade-in value, but it’s hard to balance the effects of that against the simple revenue loss game creators suffer from the repeated recycling of second-hand stock through stores that often deliberately push consumers towards used games instead of new ones. Borrowing the game from a friend is arguably preferable for the industry; no money is changing hands at all, so at least potential revenue hasn’t been sucked out by a third party.

Given, then, that we’re already talking about consumers who have a range of options for accessing software which provide no revenue to game creators, something like a Netflix-esque subscription service starts to make a lot of sense. How the revenue works in the back-end will, no doubt, be subject to endless negotiation and dispute, but the point is that at least the revenue exists; games on the service will continue to generate cash for their creators as long as they’re being played, and every cent they receive is a cent they’d never have seen in the currently dominant second-hand models. Moreover, the existence of subscription services could be a net boost for the games industry as a whole; the ability to access a large library of software for an affordable monthly subscription fee is something that will appeal to a lot of consumers, potentially bringing them into the console ecosystem.

If the business case for these services is very clear, however, the question of which technical approach will succeed is rather less so. For now, I think that Microsoft’s model – allowing consumers to download and play locally the software on its subscription service – is comfortably superior to the PS Now streaming system.

Game streaming over the Internet remains a technology that’s arguably ahead of its time; there are question marks over the business case (since the provider needs to pay for racks and racks of hardware which every consumer using the service already possesses in their own home, a duplication of functionality that makes little sense, especially since PS Now recently dropped support for “thin client” platforms like Bravia TVs), but more importantly, a huge number of consumers simply won’t be able to make use of the service because their broadband connections are not up to the standard required for high-quality, real-time gameplay. The demands of real-time game streaming are very different from the demands of watching live streams of video, because you can’t buffer a real-time game stream; when it works, it’s impressive, but the reality is that for a great many consumers it either doesn’t work at all or only works at time when the network isn’t congested.

Given the limitations of PS Now (and I think the dropping of support on Bravia TVs, mobile phones and so on is an ominous sign for the future of the service), Microsoft’s native software approach seems far more likely to be a hit with its consumers – indeed, the company may be hoping to recapture some of the magic of the Xbox 360 era, when its enormous advantage over Sony in online services helped it to maintain a lead over the PS3 for several years.

For Sony’s part, the desire to try to boost PS Now may be its undoing, at least in the short term; but an enhanced version of PS Plus (PS Plus… Plus?) with a library subscription built-in seems like a no-brainer in the medium term. It’s a win-win situation for platform holders and game creators alike. The only really big loser in all of this will be heavily pre-owned reliant retailers like GameStop; if game subscription services truly take off this year, they’ll have to scramble to find a new model before it’s too late.

Courtesy-GI.biz

Did Radiation From Neighboring Galaxies Help Create Monster Black Holes?

March 29, 2017
Filed under Around The Net

Bright radiation emitted by neighboring galaxies likely fueled the rapid growth of supermassive black holes in the early universe, a new study shows.

Ancient black holes range in size from millions to billions of solar masses and date back to as early as when the universe was less than 1 billion years old. Their early existence has puzzled scientists, as these cosmic beasts are thought to form over billions of years. 

Supermassive black holes can be found at the center of most, if not all, large galaxies, including the Milky Way. Using computer simulations, an international team of scientists found that the behemoths can grow rapidly after the bright radiation of a nearby galaxy heats up their host galaxy, which halts star formation.

For the most part, molecular hydrogen cooled the tumultuous hydrogen and helium plasma of the early universe, allowing stars and galaxies to form. However, when a heat source, such as the radiation of a nearby galaxy, destroys most of the molecular hydrogen, the gas within the galaxy can’t cool as efficiently and, as a result, stars can’t form. The leftover gas, dust and unborn star material would have then eventually collapsed into a supermassive black hole, John Wise, co-author of the study and a physicist at the Georgia Institute of Technology, told Space.com. 

While black holes are known to form following the death of massive stars — events known as supernovas — growing one to supermassive size through that route is believed to take billions of years. The new study, however, helps to explain how many of these behemoths formed so quickly in the early universe.

“The collapse of the galaxy and the formation of a million-solar-mass black hole takes 100,000 years — a blip in cosmic time,” Zoltan Haiman, an astronomy professor at Columbia University and another co-author of the study, said in a statement from Georgia Tech. “A few hundred million years later [in the simulation], it has grown into a billion-solar-mass supermassive black hole. This is much faster than we expected.”

Originally, scientists thought that the nearby galaxy in this scenario “would have to be at least 100 million times more massive than our sun to emit enough radiation to stop star formation,” the researchers said in the statement. However, the new simulation suggests that the neighboring galaxy could be smaller and closer than expected. In fact, the “Goldilocks” neighboring galaxy can’t be too hot or too cold, Wise said.

“If the nearby galaxy were too close (too hot), then the radiation would start to ‘evaporate’ the gas cloud that is collapsing to form a massive black hole,” Wise told Space.com. On the other hand, “if the nearby galaxy were too far (too cold), then the radiation would not be strong enough to destroy enough molecular hydrogen,” and the gas and dust would be able to cool and form stars rather than a massive black hole, he added.

Using NASA’s James Webb Space Telescope, which is expected to launch in 2018, the researchers plan to further study the collapse of such massive black holes and “what type of galaxy forms around one of these massive black hole seeds,” Wise said. 

“Understanding how supermassive black holes form tells us how galaxies, including our own, form and evolve and, ultimately, tells us more about the universe in which we live,” John Regan, lead author of the study and a postdoctoral researcher at Dublin City University, said in the statement from Georgia Tech.

Their recent findings were detailed March 13 in the journal Nature Astronomy.

Courtesy-Space

Apple Wins Patent Dispute In China

March 28, 2017
Filed under Mobile

A Chinese court has ruled in favor of Apple in a design patent lawsuit between the Cupertino, California company and a domestic phone-maker, overturning a ban on selling iPhone 6 and iPhone 6 Plus phones in China, Xinhua news agency reported.

Last May, a Beijing patent regulator ordered Apple’s Chinese subsidiary and a local retailer, Zoomflight, to stop selling the iPhones after Shenzhen Baili Marketing Services lodged a complaint, claiming that the patent for the design of its mobile phone 100c was being infringed by the iPhone sales.

Apple and Zoomflight took the Beijing Intellectual Property Office’s ban to court.

The Beijing Intellectual Property Court has revoked the ban, saying Apple and Zoomflight did not violate Shenzhen Baili’s design patent for 100c phones.

The court ruled that the regulator did not follow due procedure in ordering the ban and that there was insufficient proof that the designs constituted a violation of intellectual property rights.

Representatives of Beijing Intellectual Property Office and Shenzhen Baili said they would take time to decide whether to appeal the ruling, according to Xinhua.

In a related ruling, the same court denied a request by Apple to strip Shenzhen Baili of its design patent for 100c phones.

Apple first filed the request to the Patent Reexamination Board of State Intellectual Property Office. The board rejected the request, but Apple lodged a lawsuit against the rejection.

The Beijing Intellectual Property Court on Friday ruled to maintain the board’s decision. It is unclear if Apple will appeal.

Is Microsoft Blocking Kaby Lake And Ryzen From Users?

March 28, 2017
Filed under Computing

Software king of the world Microsoft is locking down system updates for those using AMD’s Ryzen and Intel’s Kaby Lake processors on Windows 7 and 8.1.

Users are now starting to encounter the following error message: “Your PC uses a processor that isn’t supported on this version of Windows.”

This message appears when a user attempts to update their OS and a quick look at Microsoft’s support page reveals upgrading to Windows 10 is the only way to fix the problem.

Microsoft’s support page on the matter says that Windows 10 is the ‘only’ OS to support these updated hardware configurations: you will need Windows 10 if you are running Kaby Lake or newer, AMD’s Bristol Ridge or newer (this includes Ryzen), or the Qualcomm 8996 and want to keep receiving the important updates needed to remain secure.

Those who own these chips should not be surprised, and indeed those who spend money on getting the latest chips should probably not be using Windows 7 or 8 anyway. AMD warned that this would be happening in February.

At the time, it said it would not be releasing drivers for Ryzen running on Windows 7. Intel hinted last year that something similar would happen for Kaby Lake support.

The question really is one of ethics: Windows 8.1 won’t hit its end of life until next year, yet Vole is switching off its support for new chips early.

Courtesy-Fud

Will Gigabit LTE Smartphones Take Off This Year?

March 28, 2017
Filed under Mobile

It has been quite some time since Qualcomm announced the Snapdragon X16, the world’s first Gigabit LTE modem. The same Gigabit LTE Snapdragon X16 modem is now part of the Snapdragon 835 – a 10nm SoC that is about to debut in a dozen high-end phones.

Many people who are not close to the matter have a hard time understanding why faster modems matter in an everyday device. Many moan that the speeds they get from their carriers don’t even touch Cat 4’s maximum download speed of 150 Mbps, forgetting that this is a best-case figure for Cat 4. In practice, average speeds rise with each new generation of the technology, and most carriers now run Cat 6 networks with a 300 Mbps maximum.

Today, Telstra in Australia, Sprint in the USA, EE in the UK and a few others have announced or already deployed their versions of Cat 16 Gigabit LTE, capable of speeds approaching 1 Gbps.

It’s a typical technology cat and mouse game. We need faster phones to get the faster internet from carriers. What many people need to understand is that they won’t really get 1 Gbps download speeds as this is a maximum, but the average speed might increase for many.

If you are getting – let’s say – 30 to 60 Mbps today with Cat 6, Gigabit LTE could increase your speeds to 60 to 120 Mbps. In our case, in Vienna, Austria, we see around 80 Mbps to 100 Mbps, and Gigabit LTE could double that to 160 Mbps to 200 Mbps. You need both a Gigabit LTE phone and a Gigabit LTE-capable network to reach Gigabit LTE speeds. There are two options – a phone powered by the Snapdragon 835 or one with the Samsung Exynos 8895. Both support Gigabit LTE speeds, and the launch of Gigabit LTE phones will speed up the deployment of this technology worldwide.
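To make the arithmetic explicit: the rule of thumb above is that typical throughput roughly doubles when a network moves from Cat 6 to Gigabit LTE. A minimal sketch, where the 2x factor is the article’s own assumption rather than a measured figure:

```typescript
// Back-of-the-envelope projection using the article's rule of thumb:
// the 2x scale factor is an assumption from the text, not a measurement.
function projectedSpeedMbps(currentMbps: number, scale: number = 2): number {
  return currentMbps * scale;
}

// Vienna example from the text: around 80-100 Mbps on Cat 6 today.
console.log(projectedSpeedMbps(80));  // 160 Mbps
console.log(projectedSpeedMbps(100)); // 200 Mbps
```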

Don’t forget that the Samsung Galaxy S8 is likely to ship with both the Exynos 8895 and the Snapdragon 835, both supporting Gigabit LTE speeds.

With the mass introduction of Snapdragon 835 and Exynos 8895 phones starting with the Samsung Galaxy S8, followed by Gigabit LTE deployment by the carriers, we expect average download and upload speeds to increase, enabling the next generation of content and applications. AT&T, T-Mobile and Sprint are already committed to Gigabit LTE, likely coming this year. Worldwide, 15 companies plan to launch Gigabit LTE this year.

If you are one of the skeptical ones who say we don’t need faster internet on the phone, remember that one very rich man by the name of Bill Gates was once not convinced the internet would succeed. That definitely doesn’t mean he was right, as now even Gates and the rest of the world can get hundreds of Mbps on a smartphone, something that didn’t really exist just a decade ago.

The same performance delta can be seen in mobile internet speeds: 3G stopped at 3.6 Mbps / 7.2 Mbps, and speeds eventually reached 21.6 Mbps with HSPA+. That was some ten years ago, and today it is normal to have a Cat 6 LTE 4G network capable of 300 Mbps; in some cases advanced carriers reach 600 Mbps, and in the case of Telstra, even 1 Gbps. Qualcomm is planning to ship the Snapdragon X20 with a 1.2 Gbps maximum speed in early 2018, and it is already sampling a modem that exceeds Gigabit LTE’s magical number.

Gigabit LTE with its 1 Gbps speed is just an introduction to 5G, and it can be viewed as a gateway to it. 5G is a new communication technology that will enable a huge technological leap. One of the things that may become a reality is 4K or even 4K 360-degree video as the default. This will push the need for more, and higher-resolution, VR-capable head-mounted displays (HMDs) and enable new games and applications that we cannot even imagine today.

Think of Facebook Live with 360-degree VR capabilities; we don’t think that is far off.

Courtesy-Fud

Twitter Mulls Subscription-Based Model

March 27, 2017
Filed under Around The Net

Twitter Inc is weighing whether to build a premium version of its popular Tweetdeck interface aimed at professionals, the company has announced, raising the possibility that it could charge subscription fees for some users for the first time.

Like most other social media companies, Twitter since its founding 11 years ago has focused on building a huge user base for a free service supported by advertising. Last month it reported it had 319 million users worldwide.

But unlike the much-larger Facebook Inc, Twitter has failed to attract enough in advertising revenue to turn a profit even as its popularity with U.S. President Donald Trump and other celebrities makes the network a constant center of attention.

Subscription fees could come from a version of Tweetdeck, an existing interface that helps users navigate Twitter.

Twitter is conducting a survey “to assess the interest in a new, more enhanced version of Tweetdeck,” spokeswoman Brielle Villablanca said in a statement.

She went on: “We regularly conduct user research to gather feedback about people’s Twitter experience and to better inform our product investment decisions, and we’re exploring several ways to make Tweetdeck even more valuable for professionals.”

There was no indication that Twitter was considering charging fees from all its users.

Word of the survey had earlier leaked on Twitter, where a journalist affiliated with the New York Times posted screenshots of what a premium version of Tweetdeck could look like.

That version could include “more powerful tools to help marketers, journalists, professionals, and others in our community find out what is happening in the world quicker,” according to one of the screenshots posted on the account @andrewtavani.

The experience could be ad-free, the description said.

Other social media firms, such as Microsoft Corp’s LinkedIn unit, already have tiered memberships, with subscription versions that offer greater access and data.

In the fourth quarter of 2016, Twitter posted the slowest revenue growth since it went public four years earlier, and revenue from advertising fell year-over-year. The company also said that advertising revenue growth would continue to lag user growth during 2017.

Is The U.S. Losing The Supercomputer Race?

March 27, 2017
Filed under Computing

Advanced computing experts at the National Security Agency and the Department of Energy are worried that China is “extremely likely” to take the lead in supercomputing as early as 2020.

A report with the catchy title “U.S. Leadership in High Performance Computing” has been penned by HPC technical experts at the NSA, the DOE, the National Science Foundation and several other agencies.

It said that China’s supercomputing advances are putting at risk not only national security, but also US leadership in high-tech manufacturing.

If China succeeds, it may “undermine profitable parts of the U.S. economy,” the report warns. Of course, it does not matter – the US government is going to start investing in private coal companies soon and that will sort the whole mess out. Nothing says high-tech like a coal-powered factory. We are sure Isambard Kingdom Brunel could come up with a steam-powered supercomputer, if he were alive, and American.

Of course the report will be dismissed by the current US government as it is written by scientists and no one believes them any more – after all they think the world is older than 6,000 years and that God is going to wipe us out with another flood, which he promised not to do.

The report said that it is easy for Americans to draw the wrong conclusions about what HPC investments by China mean — without considering China’s motivations.

“These participants stressed that their personal interactions with Chinese researchers and at supercomputing centres showed a mind-set where computing is first and foremost a strategic capability for improving the country; for pulling a billion people out of poverty; for supporting companies that are looking to build better products, or bridges, or rail networks; for transitioning away from a role as a low-cost manufacturer for the world; for enabling the economy to move from ‘Made in China’ to ‘Made by China’”.

Courtesy-Fud

Will AMD’s Polaris-Based RX 500 Launch April 18th?

March 27, 2017
Filed under Computing

According to reports, the upcoming AMD Radeon RX 500 series, which should be based on Polaris GPUs, could be slightly delayed, with the new launch date set for April 18th.

While earlier information suggested that the Polaris 10-based Radeon RX 570/580 would be coming on April 4th, with the Polaris 11-based RX 550/560 refresh following a week later on April 11th, a new report from the Chinese site Mydrivers.com, spotted by eTeknix.com, suggests that the launch date has been pushed back to April 18th.

As we’ve written before, the new Radeon RX 500 series will be based on the existing AMD Polaris GPU architecture but should have somewhat higher clocks and improved performance-per-watt, while the flagship Vega GPU-based Radeon RX Vega should be coming at a later date, most likely at the Computex 2017 show starting on May 30th.

Unfortunately, the precise details regarding the upcoming Radeon RX 500 series are still unknown but hopefully these performance and clock improvements will allow AMD to compete with Nvidia’s mainstream lineup.

Courtesy-Fud

Trello Updated To Integrate With BitBucket, Jira, HipChat And Confluence

March 24, 2017
Filed under Around The Net

Trello will be linked into the entire Atlassian ecosystem with a series of integrations announced this week. The new “power-ups” for the project management software connect it with BitBucket, Jira, HipChat and Confluence, to help customers get their work done more efficiently.

Using Trello is intended to help users keep their projects organized. The service lets people lay out virtual cards in columns on a workspace known as a board. Doing so can help with things like tracking the status of software bugs or tracking contracts through different stages of completion.
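That board/column/card structure is simple enough to model. The following is a hypothetical, minimal TypeScript sketch of the workflow described above — not Trello’s actual API or data types:

```typescript
// Hypothetical model of a Trello-style board; Trello's real API types differ.
interface Card {
  id: string;
  title: string;       // e.g. a bug report or a contract name
}

interface Column {
  name: string;        // e.g. "To Do", "In Progress", "Done"
  cards: Card[];       // ordered top to bottom
}

interface Board {
  name: string;
  columns: Column[];   // ordered left to right
}

// Moving a card between columns is the basic tracking gesture, e.g.
// advancing a software bug from "Open" to "Fixed".
function moveCard(board: Board, cardId: string, toColumn: string): void {
  for (const col of board.columns) {
    const i = col.cards.findIndex(c => c.id === cardId);
    if (i >= 0) {
      const [card] = col.cards.splice(i, 1);
      board.columns.find(c => c.name === toColumn)?.cards.push(card);
      return;
    }
  }
}
```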

Each of the connections announced Wednesday is supposed to help with the process of using Trello. Confluence users can now tie cards to new pages in Atlassian’s content management system, Jira users can connect issues from the bug tracker with cards and BitBucket users can better organize their code.

The integrations come two months after Atlassian announced that it would be acquiring Trello. They show a glimpse of a future where the project management software is increasingly tied into the other products that Atlassian owns.

Customers were asking for the integrations as soon as Atlassian’s acquisition of Trello was announced, according to Hamid Palo, the director of product and partnerships at Trello. Overall, the goal behind them is to minimize how much users have to switch between different services, in order to save time.

The acquisition and power-ups don’t mean that competing services will be boxed out of connecting with the work tracking software, Palo said.

“We’re going to continue making Trello awesome, we’re going to integrate with all of the tools that people use with Trello, and that is not going to change,” he said.

All of the integrations announced on Wednesday are available immediately, for no extra cost.
