For the second time in as many weeks, developers of the popular LastPass password manager are working to patch a serious vulnerability that could allow malicious websites to steal user passwords or infect computers with malware.
Like the LastPass flaws patched last week, the new issue was discovered and reported to LastPass by Tavis Ormandy, a researcher with Google’s Project Zero team. The researcher revealed the vulnerability’s existence in a message on Twitter, but didn’t publish any technical details about it that could allow attackers to exploit it.
According to Ormandy, the flaw affects the latest version of the LastPass browser extension for all major browsers. He claims to have tested the exploit successfully on Windows and Linux, but believes that it likely works on Mac as well.
If the extension’s binary component is also installed, the vulnerability allows attackers to execute malicious code on users’ computers when they visit a rogue website. If the component is not present, the flaw can still be used to extract passwords from users’ secure password vaults.
To make things worse, it seems the extension’s presence in the browser is enough for the flaw to be exploitable. Ormandy said on Twitter that the attack still works even if the user is logged out.
This is supposedly true only for the remote code execution attack, because without a logged-in session the password vault would remain encrypted and not accessible to a website.
“We are now actively addressing the vulnerability,” the LastPass developers said Monday in a blog post. “This attack is unique and highly sophisticated. We don’t want to disclose anything specific about the vulnerability or our fix that could reveal anything to less sophisticated but nefarious parties.”
LastPass recommends that users launch websites for which they have stored passwords directly from inside their password vaults by using the “launch” feature. The company also advises users to turn on two-factor authentication for any online services that offer this option and to beware of phishing attacks and potentially malicious links.
The U.S. Department of Commerce has agreed to remove Chinese telecommunications equipment maker ZTE Corp from a trade blacklist after the company pleaded guilty to violating sanctions on Iran and agreed to pay nearly $900 million, the agency said in a notice.
Removal from the list marks the end of a tense period for ZTE, which faced trade restrictions that could have severed its ties to critical U.S. suppliers.
“By acknowledging the mistakes we made, taking responsibility for them … we are committed to a ZTE that is fully compliant, healthy and trustworthy,” ZTE Chief Executive Zhao Xianming said in an emailed statement.
Last year, the U.S. Commerce Department placed export restrictions on ZTE as punishment for violating U.S. sanctions against Iran. The restrictions would have prevented suppliers from providing ZTE with any U.S.-made equipment, potentially freezing the Chinese handset maker’s supply chain.
Over the past 12 months, as ZTE cooperated with U.S. authorities, the U.S. Commerce Department temporarily suspended the trade restrictions with a series of three-month reprieves, allowing the company to maintain ties to U.S. suppliers.
Earlier this month, ZTE agreed to pay a total of $892.4 million and pleaded guilty to violating U.S. sanctions by sending American-made technology to Iran and lying to investigators.
The Commerce Department said on Tuesday it would impose severe restrictions on former ZTE CEO Shi Lirong, whom the agency accused of approving efforts to skirt sanctions and ship equipment to Iran.
The Commerce Department said Shi approved a systematic, written business plan to use shell companies to secretly export U.S. technology to Iran. Reuters could not immediately reach Shi for comment.
The U.S. investigation followed reports by Reuters in 2012 that ZTE had signed contracts with Iran to ship millions of dollars’ worth of hardware and software from some of America’s best-known technology companies.
U.S. authorities have said the size of the financial penalty against ZTE also reflects the fact that the company lied to investigators when executives were approached about the allegations.
As part of the deal, ZTE will be under probation for three years and agreed to cooperate in the continuing investigation.
Over the weekend, Intel pushed ahead with the release of its first consumer and enterprise SSD based on 3D XPoint technology, with latency rates roughly one hundred times lower than NAND flash alternatives that have dominated the market since 2007.
The first Optane-branded storage device is called the Optane SSD DC P4800X, which the company says is designed to be used either as high-performance storage or as a caching device in data centers. The card features a capacity of 375GB, latency of under 10 microseconds (10µs), 550,000 random 4K read IOPS, 500,000 random 4K write IOPS, and an overall endurance rating of 12.3 petabytes written (PBW).
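That endurance rating can be put in the more familiar drive-writes-per-day (DWPD) terms. A minimal sketch, assuming the 12.3 PBW is spread over a three-year warranty window (the warranty period is an assumption, not stated above):

```python
# Convert an endurance rating in petabytes written (PBW) into
# drive writes per day (DWPD). The 3-year warranty window is an
# assumption for illustration purposes.
capacity_gb = 375
endurance_pbw = 12.3
warranty_days = 3 * 365  # assumed warranty period

total_gb_written = endurance_pbw * 1_000_000   # 1 PB = 1,000,000 GB
full_drive_writes = total_gb_written / capacity_gb
dwpd = full_drive_writes / warranty_days

print(round(full_drive_writes))  # total full-drive writes over the rating
print(round(dwpd, 1))            # implied drive writes per day
```

Under that assumption the rating works out to roughly 30 full drive writes per day, far beyond what NAND-based enterprise drives typically quote.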
3D XPoint memory offers roughly 100 times lower latency than NAND flash and sits just below DRAM in the memory hierarchy, which puts real pressure on the data center market in terms of access times and endurance ratings. Intel claims the low latency and high endurance can yield between eight and 40 times faster responses under large workloads, especially for database applications, while consistently outperforming NAND-based technologies.
Originally, the company’s plan was to release 16GB and 32GB Optane storage products under the Intel Optane Memory 8000p series. These units were capable of reaching up to 300,000 random 4K reads and 120,000 random 4K writes, and up to 1,600MB/s sequential reads and 500MB/s sequential writes. The release date for these smaller configurations is currently unknown, but they are still scheduled for release sometime later this year.
The first noticeable benefit to using Optane as a storage product for enterprise users is the option to significantly upgrade the overall capacity of onboard RAM. For instance, Intel’s dual-socket Xeon systems can support up to 3TB of DRAM but are able to accommodate an additional 24TB of Optane storage. Quad-socket systems, on the other hand, can accommodate 12TB of DRAM and an additional 48TB of Optane storage.
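The capacity figures above can be summed up quickly. A trivial sketch of the DRAM-plus-Optane totals Intel quotes for each platform:

```python
# Total addressable memory per platform, from the figures above
# (DRAM ceiling plus additional Optane capacity, in TB).
platforms = {
    "dual-socket Xeon": {"dram_tb": 3, "optane_tb": 24},
    "quad-socket Xeon": {"dram_tb": 12, "optane_tb": 48},
}

for name, caps in platforms.items():
    total = caps["dram_tb"] + caps["optane_tb"]
    ratio = caps["optane_tb"] / caps["dram_tb"]
    print(f"{name}: {total} TB total ({ratio:.0f}x the DRAM ceiling in Optane)")
```

In other words, a dual-socket system tops out at 27TB of combined memory and a quad-socket system at 60TB, with Optane providing the overwhelming majority of it.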
Not cheap – $1,520 at launch, compatible with Kaby Lake
The Intel Optane P4800X 375GB PCI-E add-in card will initially be a very application-specific product for “creative professionals” and enterprise users who need low-latency caching at every point in their systems – from onboard CPU cache, to storage, to DRAM. The other usage model will be for enterprise users who need substantially more memory available to their systems, even at a slightly higher latency cost. The company will initially release the 375GB PCI-E model at $1,520 with limited availability, followed by 375GB and 750GB U.2 models in Q2, and a 1.5TB PCI-E add-in card in the second half of the year.
We expect these modules to be compatible with current Z270 chipsets along with upcoming X299 chipsets due in fall.
Optane DIMMs come next year
This year, Intel is sticking to Optane products in the PCI-Express form factor, but next year it plans to bring the technology to performance and enterprise users in the form of individual Optane DIMMs. Pricing and spec options for such modules have yet to be discussed, though the technology in both formats is expected to significantly boost applications that consume large amounts of raw memory.
When the original Doom was released in 1993, its unprecedentedly realistic graphic violence fueled a moral panic among parents and educators. Over time, the game’s sprite-based gore has lost a bit of its impact, and that previous sentence likely sounds absurd.
Given what games have depicted in the nearly quarter century since Doom, that level of violence is no longer shocking so much as it is quaint, perhaps even endearing. So when it came time for id Software to reboot the series with last year’s critically acclaimed remake of Doom, one of the things the studio had to consider was exactly how violent it should be, and to what end.
Speaking with GamesIndustry.biz at the Game Developers Conference last month, the Doom reboot’s executive producer and game director Marty Stratton and creative director Hugo Martin acknowledged that the context of the first Doom’s violence had changed greatly over the years. And while the original’s violence may have been seen as horrific and shocking, they wanted the reboot to skew closer to cartoonishly entertaining or, as they put it, less Saw and more Evil Dead 2.
“We were going for smiles, not shrieks,” Martin said, adding, “What we found with violence is that more actually makes it safer, I guess, or just more acceptable. It pushes it more into the fun zone. Because if it’s a slow trickle of blood out of a slit wrist, that’s Saw. That’s a little bit unsettling, and sort of a different type of horror. If it’s a comical fountain of Hawaiian Punch-looking blood out of someone’s head that you just shot off, that’s comic book. That’s cartoonish, and that’s what we wanted.”
“They’re demons,” Stratton said. “We don’t kill a single human in all of Doom. No cursing, no nudity. No killing of humans. We’re actually a pretty tame game when you think about it. I’ve played a lot of games where you just slaughter massive amounts of human beings. I think if we had to make some of the decisions we make about violence and the animations we do and if we were doing them to humans, we would have completely different attitudes when we go into those discussions. It’s fun to sit down in a meeting and think about all the ways it would be cool to rip apart a pinky demon or an imp. But if we had the same discussions about, ‘How am I going to rip this person in half?’ or rip his arm off and beat him over the head with it, it takes on a different connotation that I don’t know would be as fun.”
That balancing act between horror and comedy paid off for the reboot, but it was by no means the only line last year’s Doom had to straddle. There was also the question of what a modern Doom game would look like. The first two Doom games were fast-paced shooters, while the third was a much slower horror-tinged game where players had to choose between holding a gun or a flashlight at the ready. Neither really fit into the recent mold of AAA shooters, and the developers knew different people would have very different expectations for a Doom game in 2016.
As Stratton explained, “At that point, we went to, ‘What do we want? What do we think a Doom game should be moving forward?’ As much as we always consider how the audience is going to react to the game–what they’re thinking, and what we think they want–back in the very beginning, it was, ‘What do we think Doom should be, and what elements of the game do we want to build the future of Doom on?’ And that’s really where we came back to Doom 1, Doom II, the action, the tone, the attitude, the personality, the character, the irreverence of it… those were all key words that we threw up on the board in those early days. And then mechanically, it was about the speed. It was about unbelievable guns, crazy demons, really being very honest about the fact that it was Doom. It was unapologetic early on, and we built from there.”
It helped that they had a recent example of how not to bring Doom into the current generation. Prior to the Doom reboot, id Software had been working on Doom 4, which Stratton said was a good game, but just didn’t feel like Doom. For one, it cast players as a member of a resistance army rather than a one-marine wrecking crew. It was also slower from a gameplay perspective, utilizing a cover-based system shared by numerous modern shooters designed to make the player feel vulnerable.
“None of us thought that the word ‘vulnerable’ belonged in a proper Doom game,” Martin said. “You should be the scariest thing in the level.”
Doom 4 wasn’t a complete write-off, however. The reboot’s glory kill system of over-the-top executions actually grew out of a Doom 4 feature, although Stratton said they made it “faster and snappier.”
Of course, not everything worked as well. At one point the team tried giving players a voice in their ears to help guide them through the game, a pretty standard first-person shooter device along the lines of Halo’s Cortana. Stratton said while the device works well for other franchises, it just didn’t feel right for Doom, so it was quickly scrapped.
“We didn’t force anything,” Stratton said. “If something didn’t feel like Doom, we got rid of it and tried something that would feel like Doom.”
That approach paid off well for the game’s single-player mode, but Stratton and Martin suggested they weren’t quite as thrilled with multiplayer. Both are proud of the mode (which continues to be worked on) and confident they delivered a high-quality experience, but each had misgivings. Stratton said if he could change one thing, it would be to redo the multiplayer progression system and give more enticing or better-placed “hooks” to keep players coming back game after game. Martin wished the team had messaged what the multiplayer would be a little more clearly, saying too many expected a pure arena shooter along the lines of Quake 3 Arena, when that was never the development team’s intent.
Those issues aside, it’s clear the pair feel the new wrinkles and changes they made to the classic Doom formula paid off more often than not.
“Lots worked,” Stratton said. “That’s probably the biggest point of pride for us. The game really connected with people. We always said we wanted to make something that was familiar to long-time fans, felt like Doom from a gameplay perspective and from a style and tone and attitude perspective. And I think we really accomplished that at a high level. And I think we made some new fans, which is always what you’re trying to do when you have a game that’s only had a few releases over the course of 25 years… You’re looking to bring new people into the genre, or into the brand, and I think we did that.”
RedMonk’s latest programming language ranking is based on data from both GitHub and Stack Overflow, and the Red Monks have chanted a top 10 list for 2017.
5: (tie) C# and C++
6: (tie) Ruby and CSS
While there was little change in the top ten, there were a few stat changes among the also-rans. This was mostly because the GitHub data now counts the number of pull requests rather than the number of repositories.
As a result, Swift was a major beneficiary of the new GitHub process, jumping eight spots from 24 to 16.
For those who came in late, Swift was supposed to be the Great White Hope, but the early hype gave way to scepticism. The language appears to be entering something of a trough of disillusionment, yet the Red Monks note that Swift has reached a Top 15 ranking faster than any other language they have tracked since the rankings began.
TypeScript also did well, moving up 17 spots, and PowerShell moved from 36 to 19.
One of the biggest overall gainers of any of the measured languages, Rust leaped from 47 on the board to 26, one spot behind Visual Basic.
Software king of the world Microsoft is locking down system updates for those using AMD’s Ryzen and Intel’s Kaby Lake processors on Windows 7 and 8.1.
Users are now starting to encounter the following error message: “Your PC uses a processor that isn’t supported on this version of Windows.”
This message appears when a user attempts to update their OS, and a quick look at Microsoft’s support page reveals that upgrading to Windows 10 is the only way to fix the problem.
Microsoft’s support page on the matter says that Windows 10 is the ‘only’ OS to support these updated hardware configurations. If you are running Kaby Lake or newer, AMD’s Bristol Ridge or newer (this includes Ryzen), or the Qualcomm 8996 and want to keep receiving important security updates, you will need Windows 10.
Those who own these chips should not be surprised, and indeed those who spend money on getting the latest chips should probably not be using Windows 7 or 8 anyway. AMD warned that this would be happening in February.
At the time, it said it would not be releasing drivers for Ryzen running on Windows 7. Intel hinted that something similar would happen for Kaby Lake support last year.
The question really is one of ethics. Windows 8.1 won’t hit its end of life until next year, yet Vole is switching off its support early for new chips.
It has been quite some time since Qualcomm announced the Snapdragon X16, the world’s first Gigabit LTE modem. The same GigabitLTE Snapdragon X16 modem is now part of the Snapdragon 835 – a 10nm SoC that is about to debut in a dozen high-end phones.
Many people who are not close to the matter have a hard time understanding why it’s important to get faster modems into an everyday device. Many moan that the speeds they get from their carriers don’t even touch the Cat 4 maximum download speed of 150 Mbps, forgetting that this is a best-case figure for Cat 4. What actually happens is that average speeds increase with each new technology, as most carriers now run Cat 6 networks with a 300 Mbps maximum.
Today, Telstra in Australia, Sprint in the USA, EE in the UK and a few others have announced or already deployed their versions of Cat 16 GigabitLTE, capable of speeds approaching 1 Gbps.
It’s a typical technology cat and mouse game. We need faster phones to get the faster internet from carriers. What many people need to understand is that they won’t really get 1 Gbps download speeds as this is a maximum, but the average speed might increase for many.
If you are getting – let’s say – 30 to 60 Mbps today with Cat 6, Gigabit LTE could increase your speeds to 60 to 120 Mbps. In our case, in Vienna, Austria, we see around 80 Mbps to 100 Mbps, and GigabitLTE could double that to 160 to 200 Mbps. You would need a GigabitLTE phone as well as a GigabitLTE-capable network to reach GigabitLTE speeds. There are two options – a Snapdragon 835-powered phone or one with the Samsung Exynos 8895. Both support GigabitLTE speeds, and the launch of GigabitLTE phones will speed up deployment of this technology worldwide.
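The rough doubling described above can be sketched as a simple scaling estimate. The ~2x uplift factor is this article's own real-world guess, not a measured result, and actual gains will vary by network:

```python
# Estimate average Gigabit LTE speeds from measured Cat 6 averages,
# using the ~2x uplift suggested in the text (an assumption, not a
# guarantee: peak category rates don't translate directly to averages).
def gigabit_lte_estimate(cat6_mbps, uplift=2.0):
    """Scale a measured Cat 6 average by an assumed uplift factor."""
    return cat6_mbps * uplift

for measured in (30, 60, 80, 100):
    est = gigabit_lte_estimate(measured)
    print(f"Cat 6 at {measured} Mbps -> roughly {est:.0f} Mbps on Gigabit LTE")
```

The point is that even though nobody will see the 1 Gbps peak, the whole distribution of everyday speeds shifts upward.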
Don’t forget that Samsung Galaxy S8 is likely to ship with both Exynos 8895 and Snapdragon 835, both supporting GigabitLTE speeds.
With the mass introduction of Snapdragon 835 and Exynos 8895 phones, starting with the Samsung Galaxy S8, followed by GigabitLTE deployment by the carriers, we expect average download and upload speeds to increase, enabling the next generation of content and applications. AT&T, T-Mobile and Sprint appear already committed to GigabitLTE, likely coming this year. Worldwide, 15 companies plan to launch GigabitLTE this year.
If you are one of the sceptical ones who say we don’t need faster internet on the phone, remember that one very rich man by the name of Bill Gates wasn’t convinced of the internet’s success either. He wasn’t right about that, as now even Gates and the rest of the world can get speeds of hundreds of Mbps on a smartphone, something that didn’t really exist just a decade ago.
The same performance delta applies to internet speed: 3G stopped at 3.6 Mbps / 7.2 Mbps, eventually reaching 21.6 Mbps with HSPA+. That was some ten years ago; today it is normal to have a Cat 6 LTE network capable of 300 Mbps, advanced carriers reach 600 Mbps, and in Telstra’s case even 1 Gbps. Qualcomm plans to ship the Snapdragon X20, with a 1.2 Gbps maximum, in early 2018, and is already sampling a modem that exceeds GigabitLTE’s magical number.
GigabitLTE with 1Gbps speed is just an introduction to 5G speeds, and it can be viewed as a gateway to 5G. 5G is a new communication technology that will enable a huge technology leap. One of the things that may become a reality is 4K or even 4K 360 video as the default. This will push the need for more and higher resolution VR capable Head Mounted Devices (HMD) and enable new games and applications that we cannot even imagine today.
Think about Facebook Live with 360-degree VR capabilities. We don’t think that this is far off.
Advanced computing experts at the National Security Agency and the Department of Energy are worried that China is “extremely likely” to take the lead in supercomputing as early as 2020.
A report with the catchy title “U.S. Leadership in High Performance Computing” has been penned by HPC technical experts at the NSA, the DOE, the National Science Foundation and several other agencies.
It said that China’s supercomputing advances put at risk not only national security but also US leadership in high-tech manufacturing.
If China succeeds, it may “undermine profitable parts of the U.S. economy,” the report warns. Of course, it does not matter – the US government is going to start investing in private coal companies soon and that will sort the whole mess out. Nothing says high-tech like a coal powered factory. We are sure Isambard Kingdom Brunel could come up with a steam powered supercomputer, if he were alive, and American.
Of course the report will be dismissed by the current US government as it is written by scientists and no one believes them any more – after all they think the world is older than 6,000 years and that God is going to wipe us out with another flood, which he promised not to do.
The report said that it is easy for Americans to draw the wrong conclusions about what HPC investments by China mean — without considering China’s motivations.
“These participants stressed that their personal interactions with Chinese researchers and at supercomputing centres showed a mind-set where computing is first and foremost a strategic capability for improving the country; for pulling a billion people out of poverty; for supporting companies that are looking to build better products, or bridges, or rail networks; for transitioning away from a role as a low-cost manufacturer for the world; for enabling the economy to move from ‘Made in China’ to ‘Made by China’”.
According to reports, the upcoming AMD Radeon RX 500 series, which should be based on Polaris GPUs, could be slightly delayed, with the new launch date set for April 18th.
While earlier information suggested that the Polaris 10-based Radeon RX 570/580 would arrive on April 4th, with the Polaris 11-based RX 550/560 refresh coming a week later on April 11th, a new report from Chinese site Mydrivers.com, spotted by eTeknix.com, suggests that the launch has been pushed back to April 18th.
As we’ve written before, the new Radeon RX 500 series will be based on the existing AMD Polaris GPU architecture but should have somewhat higher clocks and improved performance per watt, while the flagship Vega-based Radeon RX Vega should arrive at a later date, most likely at the Computex 2017 show starting on May 30th.
Unfortunately, the precise details regarding the upcoming Radeon RX 500 series are still unknown but hopefully these performance and clock improvements will allow AMD to compete with Nvidia’s mainstream lineup.
Trello will be linked into the entire Atlassian ecosystem with a series of integrations announced this week. The new “power-ups” for the project management software connect it with BitBucket, Jira, HipChat and Confluence, to help customers get their work done more efficiently.
Using Trello is intended to help users keep their projects organized. The service lets people lay out virtual cards in columns on a workspace known as a board. Doing so can help with things like tracking the status of software bugs or tracking contracts through different stages of completion.
Each of the connections announced Wednesday is supposed to help with the process of using Trello. Confluence users can now tie cards to new pages in Atlassian’s content management system, Jira users can connect issues from the bug tracker with cards and BitBucket users can better organize their code.
The integrations come two months after Atlassian announced that it would be acquiring Trello. They show a glimpse of a future where the project management software is increasingly tied into the other products that Atlassian owns.
Customers were asking for the integrations as soon as Atlassian’s acquisition of Trello was announced, according to Hamid Palo, the director of product and partnerships at Trello. Overall, the goal behind them is to minimize how much users have to switch between different services, in order to save time.
The acquisition and power-ups don’t mean that competing services will be boxed out of connecting with the work tracking software, Palo said.
“We’re going to continue making Trello awesome, we’re going to integrate with all of the tools that people use with Trello, and that is not going to change,” he said.
All of the integrations announced on Wednesday are available immediately, for no extra cost.
A spokesman for Apple confirmed that the company acquired DeskConnect, the developer of the Workflow app, along with the app itself, but did not provide further details.
Workflow, developed for the iPhone, iPad and Apple Watch, allows users to drag and drop combinations of actions to create workflows that interact with the apps and content on the device. It won an Apple design award in 2015 at its annual Worldwide Developers Conference.
Some of the examples of tasks for which Workflow can be used are making animated GIFs, adding a home screen icon to call a loved one and tweeting a song the user has been listening to, according to a description of the app.
Apple is keeping the app alive on its App Store and it has been made free, according to TechCrunch, which first reported the acquisition.
The company, which typically comments on its acquisitions with the standard line that “Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans,” went on to comment about the benefits of the app.
The app was selected for the Apple design award “because of its outstanding use of iOS accessibility features, in particular an outstanding implementation for VoiceOver with clearly labeled items, thoughtful hints, and drag/drop announcements, making the app usable and quickly accessible to those who are blind or low-vision,” Apple told TechCrunch.
It isn’t clear at this point how the app will be integrated with Apple’s offerings. Besides offering a standalone Workflow app, Apple may possibly look at integrating the technology into iOS with Siri being the key interface for many users, particularly for disabled people.
Developers of the popular LastPass password manager rushed to roll out a patch to fix a serious vulnerability that could have allowed attackers to steal users’ passwords or execute malicious code on their computers.
The vulnerability was discovered by Google security researcher Tavis Ormandy and was reported to LastPass on Monday. It affected the browser extensions installed by the service’s users for Google Chrome, Mozilla Firefox and Microsoft Edge.
According to a description in the Google Project Zero bug tracker, the vulnerability could have given attackers access to internal commands inside the LastPass extension. Those are the commands used by the extension to copy passwords or fill in web forms using information stored in the user’s secure vault.
If the extension’s binary component is installed, the “openattach” command can be used to run arbitrary code on the computer, Ormandy said on the bug tracker.
The LastPass developers deployed a workaround on their server to prevent exploitation and plan to include a full fix in new versions.
On Tuesday Ormandy reported another vulnerability in the Firefox extension that, according to the LastPass developers, was related to the first one. That vulnerability was fixed in a new version of the Firefox extension, 4.1.36a, that was released Wednesday.
“We have no indication that any of the reported vulnerabilities were exploited in the wild, but we’re doing a thorough review at this time to confirm,” the LastPass developers said in a blog post. “No password changes are required of users at this time.”
The two biggest cities in the U.S. — New York City and Los Angeles — still fall below many smaller U.S. cities in overall wireless performance, according to millions of field tests performed by RootMetrics in the second half of 2016.
The New York metro area, with 18 million people, ranked just 66th in the latest round of tests of the nation’s largest 125 metro areas. Meanwhile, L.A., with 12.1 million people, ranked 49th. In testing done by RootMetrics in the first half of last year, New York finished 59th, L.A., 99th.
L.A. improved in two of six measurements: call and data performance. New York’s drop was largely driven by a “steep decline” in network speed and data performance, RootMetrics said.
The reasons for New York’s decline — and declines in other cities — depend on multiple factors. “These metro rankings are relative; the most common reason for a ranking drop is not that performance is declining in a particular city, but that performance is improving faster in other cities,” said Annette Hamilton, director at RootMetrics.
RootMetrics evaluates the nation’s four largest carriers using actual phones the carriers sell in tests conducted outdoors and inside buildings. Sometimes a carrier will temporarily take down service in a cell tower while improvements are made; also, a recent increase in the number of users and the rich video content they download could burden a cell tower’s capacity and affect performance. As some cities improve in overall performance, they can displace other top-ranked cities.
“While mobile performance is generally strong across most areas of the country, our data shows that not all metro areas are created equal when it comes to network performance,” RootMetrics said in a report.
Besides New York, other large metro areas dropped in several categories from the first half of 2016. Boston, the 10th largest in population, fell from 17th to 97th, finishing near the bottom in network reliability and call performance. Miami, fourth in population, dropped from 84th to 89th due to a decline in network reliability and call performance.
Both Atlanta and Chicago declined from their top five finishes in early 2016. Chicago finished 8th overall in the latest tests, and dropped to 65th in text performance. Atlanta dropped from third to 23rd, with declines in all six categories that RootMetrics measures: overall performance, network reliability, network speed, data performance, call performance and text performance.
Hamilton said while Atlanta placed 23rd, it had a “stellar reputation for speed and data performance,” with Verizon showing the fastest median download speed of 37.7Mbps. Further, while Boston came in 97th, three of the four wireless carriers there clocked median download speeds above 20Mbps, which she described as “more than fast enough to easily complete typical mobile tasks.”
In 2017, she added, “We expect to see metro rankings shift again as carriers continue to deploy new capabilities to meet mobile demands.”
Houston, the seventh-largest metro area, improved — moving from 51st to 18th. RootMetrics reported that all four carriers showed “superb” rates of getting connected and staying connected to the network during data reliability testing and saw a big leap in call performance.
The top five metro areas by overall performance were Indianapolis; Richmond, Va.; Cleveland and Columbus, Ohio; and Minneapolis. The bottom five of the 125 measured were Hudson Valley, N.Y., in 121st place, followed by Springfield, Mass.; Santa Rosa, Calif.; Worcester, Mass.; and Omaha.
A broad coalition of advertising trade groups, ad buyers and sellers from Western Europe and the United States are pushing the industry to stop using annoying online marketing formats that have given rise to use of ad-blockers.
The types of ads the coalition has identified as falling below standard include pop-up advertisements, auto-play video ads with sound, flashing animated ads and full-screen ads that mask underlying content from readers or viewers.
The explosion of ad-blocking tools has launched a prolonged debate within the advertising industry over whether to rein in abusive ad practices or simply freeze out consumers who use ad blockers and still expect access to premium content.
The Coalition for Better Ads said on Wednesday it was publishing the voluntary standards after a study in which more than 25,000 web surfers and mobile phone users rated ads.
They identified six types of desktop web ads and 12 types of mobile ads as falling beneath a threshold of consumer acceptability and called on advertisers to avoid them.
Matti Littunen, research analyst at Enders Analysis focusing on digital media, said the ad formats identified by the coalition “have already been discouraged for years by these bodies and yet are still commonplace.”
The coalition is made up of major advertising associations from Britain, France, Germany and the United States, online ad platforms Google and Facebook, advertisers such as Procter & Gamble and Unilever and news publishers including News Corp, Washington Post and Thomson Reuters, the corporate parent of Reuters News.
“This is an opportunity, with the breadth of our participation, to actually not only capture what the consumer doesn’t want but also to really educate and take action to make that a reality in the online experience,” said Chuck Curran, a lawyer for the coalition, on a call with reporters.
“It’s that measurement of the point where the consumer is not just dissatisfied with the ad experience but actually more likely to use ad blockers and this is what we capture with the better ads standards.”
Ad-blocking, which has surged steadily since 2013, covered 615 million computer or mobile devices in 2016, up 30 percent from a year earlier, according to estimates from Dublin-based PageFair, a firm that helps advertisers find ways to overcome blockers. That’s 11 percent of the world’s online population.
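As a quick sanity check on PageFair’s figures, the reported 30 percent year-over-year growth implies a prior-year device count of roughly 473 million. This back-of-the-envelope arithmetic is ours, not PageFair’s:

```python
# Back-of-the-envelope check of the PageFair figures quoted above.
devices_2016_m = 615   # millions of devices blocking ads in 2016
yoy_growth = 0.30      # reported 30 percent year-over-year growth

# 2016 count = 2015 count * (1 + growth), so invert to recover 2015
devices_2015_m = devices_2016_m / (1 + yoy_growth)
print(round(devices_2015_m))  # → 473 (million devices, approximately)
```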
It is no secret that worldwide tablet shipments have been declining for the past three years, with year-over-year slumps bringing shipments down to 39.6 million units in Q4 2016.
For the second time in a year, the overall tablet market is expected to dip under 40 million units in a quarter, with first-quarter 2017 shipments forecast at 39.03 million units. While shipment forecasts vary depending on the research group assembling the numbers, Statista expects a total of 136 million units to ship this year, down from its estimated 150 million in 2016. Other sources such as Digitimes put last year’s total closer to 183 million units, though Digitimes has tempered its forecast this year, stating that quarterly and yearly declines will both be less severe than those seen in the first quarter of 2016.
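To put Statista’s estimates in perspective, a drop from roughly 150 million units in 2016 to a forecast 136 million this year works out to a decline of about 9 percent year over year. The arithmetic here is ours, using the figures cited above:

```python
# Year-over-year decline implied by Statista's tablet shipment estimates.
shipments_2016_m = 150   # estimated units shipped in 2016, millions
shipments_2017_m = 136   # forecast units for 2017, millions

pct_change = (shipments_2017_m - shipments_2016_m) / shipments_2016_m * 100
print(round(pct_change, 1))  # → -9.3 (a roughly 9 percent decline)
```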
According to reports from IDC, tablet sales are projected to continue declining throughout the year, bringing its analysis in line with Statista’s projection. Most analysts attribute the decline to growing demand for 2-in-1 Windows convertible PCs, which offer the same ultra-thin profiles as slate tablets along with more performance and a more robust productivity experience. However, others argue that 2-in-1 devices offer no substantive advantages over tablets with keyboard cases, now that vendors like Apple, Samsung and Google offer high-performing ARM-based chips with RAM and storage capacities to match most midrange PC notebooks.
Native stylus tablets will take on 2-in-1 convertible Windows PCs
Industry watchers expect more native tablet manufacturers to move steadily into the convertible 2-in-1 PC business in 2017, while standouts like Apple continue to market the iPad Pro as a credible hybrid convertible alternative to the Microsoft Surface series. Apple once again appears poised to dominate the native tablet market, leaning on its marketing muscle to sell as many magnetically latching Pencils and keyboard cases as it can and to pull market share away from Microsoft and similar stylus-equipped Windows products from HP, Samsung, Lenovo, Dell, ASUS and others.
Depending on how fast Apple can ramp up tablet shipments this quarter, the success of its 10.5-inch and 12.9-inch iPad Pro models is expected to encourage native tablet manufacturers to make a stronger stand in the “hybrid tablet” market – or tablets that include detachable keyboard cases. Current competition in this space includes the Samsung Galaxy Note 10.1, Note Pro 12.2 and TabPro S, Huawei MateBook, Dell Venue 11 Pro 7000, and Google Pixel C.
Apple, Samsung and Microsoft all introducing new products in Q1
The first quarter of a new year is traditionally a slow season for consumer hardware sales, but this year the market can expect new product announcements from US- and Korea-based vendors including Apple, Microsoft and Samsung. The first of these, the Samsung Galaxy Tab S3, was introduced last month at Mobile World Congress, while Apple is expected to announce its new trio of iPads sometime in mid-April to coincide with the opening of its new spaceship-like headquarters in Cupertino. Lastly, Microsoft is expected to announce its fifth-generation Surface tablet before the end of spring, which is to say before June 20.