OpenAPI Suffers Another Security Issue

July 1, 2016 by Michael  
Filed under Computing

A remote code execution flaw has been identified in the widely used OpenAPI framework, also known as Swagger, and it will be easily exploited unless a patch is rushed out.

The disclosure was made this week when a module for the widely used Metasploit hacking tool was released, making it easier for criminals to exploit the flaw.

Metasploit is a penetration-testing tool used by companies that build services with RESTful APIs, such as Microsoft, PayPal, Getty Images, Intuit and Apigee, to test the resilience of their systems.

Swagger is an open source project that provides a standard, language-agnostic interface to RESTful APIs, enabling humans and computers to discover and understand the capabilities of a service without access to its source code or documentation and without inspecting network traffic.

Scott Davis, application security researcher at Rapid7, explained in a blog post about the CVE-2016-5641 flaw that the disclosure “will address a class of vulnerabilities in a Swagger Code Generator in which injectable parameters in a Swagger JSON or YAML [a human-readable data serialisation language] file facilitate remote code execution. This vulnerability applies to NodeJS, PHP, Ruby, and Java and probably other languages as well.”

Other code-generation tools may also be vulnerable to parameter injection and could be affected by this approach.

“By leveraging this vulnerability, an attacker can inject arbitrary execution code embedded with a client or server generated automatically to interact with the definition of service,” Davis added.

“Within the Swagger ecosystem, there are fantastic code generators which are designed to automagically take a Swagger document and then generate stub client code for the described API.

“This is a powerful part of the solution that makes it easy for companies to provide developers the ability to quickly make use of their APIs. The Swagger definitions are flexible enough to describe most RESTful APIs and give developers a great starting point for their API client.”

The flaw arises because the code generators do not account for the possibility of a malicious Swagger definition document, which results in classic parameter injection with a “new twist on code generation”, according to Davis.

“Maliciously crafted Swagger documents can be used to dynamically create HTTP API clients and servers with embedded arbitrary code execution in the underlying operating system,” he explained.

“This is achieved by the fact that some parsers/generators trust insufficiently sanitized parameters in a Swagger document to generate a client code base.

“On the client side, a vulnerability exists in trusting a malicious Swagger document to create any generated code base locally, most often in the form of a dynamically generated API client.

“On the server side, a vulnerability exists in a service that consumes Swagger to dynamically generate and serve API clients, server mocks and testing specs.”
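In practical terms, Davis is describing string fields from an untrusted Swagger document being pasted straight into generated source code. The short Python sketch below is a hypothetical illustration of that class of bug, not Rapid7’s proof of concept: an invented info.title value tries to escape a string literal, and a simple allow-list check (the permitted character set is an assumption for the example) rejects the document before any code is generated.

```python
import json
import re

# Hypothetical illustration of the CVE-2016-5641 class of issue: string
# fields from a Swagger/OpenAPI document are copied verbatim into
# generated source, so a crafted value can try to break out of a string
# literal and smuggle in executable statements. The document below is
# invented for this example; it is not Rapid7's proof of concept.
MALICIOUS_SPEC = json.loads(r"""
{
  "swagger": "2.0",
  "info": {
    "title": "Example API\"; __import__('os').system('id'); \"",
    "version": "1.0"
  },
  "paths": {}
}
""")

# A defensive allow-list check that a generator (or anyone consuming
# third-party Swagger documents) might apply before templating values
# into source code. The permitted character set is an assumption.
SAFE_TEXT = re.compile(r"^[A-Za-z0-9 ._-]*$")

def validate_spec(spec: dict) -> None:
    title = spec.get("info", {}).get("title", "")
    if not SAFE_TEXT.match(title):
        raise ValueError(f"suspicious characters in info.title: {title!r}")

if __name__ == "__main__":
    try:
        validate_spec(MALICIOUS_SPEC)
    except ValueError as err:
        print("rejected Swagger document:", err)
```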

It is not yet known when a patch for the flaw will be released.

Courtesy-TheInq

 

Is nVidia Having Production Issues?

July 1, 2016 by Michael  
Filed under Computing

The rumor mill is flat out claiming that TSMC is getting the blame for the shortage of GTX 1080 and GTX 1070 cards. However, sources have been on the blower to say that is untrue: the lack of availability is caused by exceptionally strong sales.

The GTX 1080 cards were launched on 27 May and the GTX 1070 on 10 June, but stocks are scarcer than an intelligent post-Brexit plan in the UK. Even the over-priced Founders’ Edition cards are as rare as an apology from an Italian politician.

The rumor is that TSMC is having trouble producing the 16nm FinFET chips that power the Pascal GPUs in the GTX 1080 and GTX 1070. What we are seeing, however, is demand overwhelming supply – the GeForce cards have been selling better than any high-end card in recent history.

The reason is simple – the cards’ performance is exceptional, and if you are in the market for a $500+ card you definitely want the 1080 or the 1070. AMD so far has nothing new to offer as a Fury X replacement.

According to many leaks the Radeon RX480 will launch tomorrow, June 29th, but as you probably know by now, this card cannot compete with the GTX 1080 or 1070. The performance of the Radeon RX480 should land somewhere between the GTX 960 and GTX 970, which is quite good for a mainstream card.

Again, people who spend $500+ on GPUs want more than that – they want to play Doom, Battlefield 1 and similar high-end titles at 1440p or 4K resolution on Ultra settings. This is what is causing the shortage of cards.

Courtesy-Fud

 

‘DoNotPay’ Robot Lawyer Helps Challenge Traffic Tickets

June 30, 2016 by mphillips  
Filed under Around The Net

Robots are no strangers to the law and legal matters thanks to tools like LawGeex, but one has emerged recently that appears to be a Robin Hood of the modern world.

DoNotPay is the brainchild of 19-year-old Stanford University student Joshua Browder, and it has already successfully contested some 160,000 parking tickets across London and New York. It’s free to use and has reportedly saved its users some $4 million in less than two years.

“DoNotPay has launched the UK’s first robot lawyer as an experiment,” the site explains. “It can talk to you, generate documents and answer questions. It is just like a real lawyer, but is completely free and doesn’t charge any commission.”

Earlier this week the bot was acknowledged on Twitter by the commissioner of the U.S. Federal Communications Commission.

DoNotPay’s artificially intelligent software uses a chat-like interface to interact with its users. It can also be used to help passengers on delayed airplane flights obtain compensation. Reportedly, Browder plans to extend the service to Seattle next. Meanwhile, he’s also working on helping HIV-positive people understand their rights and on a service for Syrian refugees.

All in all, Browder sees a bigger future for A.I. than the mundane tasks it typically handles today. As he said in a recent tweet, the “value in bots is not to order pizzas.”

 

 

Cisco Systems, Thales Sign Deal On Cybersecurity

June 30, 2016 by mphillips  
Filed under Around The Net

French electronics group Thales is looking to boost its revenues by hundreds of millions of euros in the cybersecurity field through a strategic agreement it has signed with Cisco Systems, the company said on Tuesday.

“We hope that with this agreement, we will add several hundred millions of euros in the next years,” said Jean-Michel Lagarde, who heads secure communications and information systems at Thales.

“It will have a multiplier effect, as this is not only about cybersecurity, but also about secure systems for cities and airports.”

The two companies have been partners since 2010 and plan to co-develop a solution to better detect and counter cyber attacks in real time, Thales said.

Thales generates 500 million euros ($550 million) annually in the cybersecurity business, notably in data protection thanks to the acquisition in March of Vormetric for 375 million euros.

The jointly developed solution will be aimed first at French infrastructure providers and will then be deployed globally, Cisco and Thales said in a statement.

 

 

Hacker Offering 10M Medical Records For Sale

June 29, 2016 by mphillips  
Filed under Around The Net

A hacker claims to have stolen nearly 10 million patient records and is now offering them for sale for about $820,000.

This past weekend, the hacker, called thedarkoverlord, began posting the sale of the records on TheRealDeal, a black market found on the deep Web. (It can be visited through a Tor browser.)

The data includes names, addresses, dates of birth, and Social Security numbers – all of which could be used to commit identity theft or access the patient’s bank accounts.

These records are being sold in four separate batches. The biggest batch includes 9.3 million patient records stolen from a U.S. health insurance provider, and it went up for sale on Monday.

The hacker used a little-known vulnerability within the Remote Desktop Protocol to break into the insurance provider’s systems, he said in his posting on the black market site.

The three other batches cover a total of 655,000 patient records, from healthcare groups in Atlanta, Georgia, Farmington, Missouri, and another city in the Midwestern U.S. The hacker didn’t give the names of the affected groups.

To steal these patient records, the hacker used “readily available plain text” usernames and passwords to access the networks where the data was stored, according to his sales postings.

In an online message sent through the market, thedarkoverlord declined to answer any questions unless paid. The hacker wants a total of 1,280 bitcoins for the data he stole.

 

 

Added Benefit Of Cloud Computing: Less Energy Usage

June 29, 2016 by mphillips  
Filed under Computing

Just a decade ago, power usage at data centers was growing at an unsustainable rate, soaring 24% from 2005 to 2010. But a shift to virtualization, cloud computing and improved data center management is reducing energy demand.

According to a new study, data center energy use is expected to increase just 4% from 2014 to 2020, despite growing demand for computing resources.

Total data center electricity usage in the U.S., which includes powering servers, storage, networking and the infrastructure to support it, was at 70 billion kWh (kilowatt hours) in 2014, representing 1.8% of total U.S. electricity consumption.

Based on current trends, data centers are expected to consume approximately 73 billion kWh in 2020, a nearly flat trajectory over the next four years. “Growth in data center energy consumption has slowed drastically since the previous decade,” according to a study by the U.S. Department of Energy’s Lawrence Berkeley National Laboratory. “However, demand for computations and the amount of productivity performed by data centers continues to rise at substantial rates.”
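As a back-of-the-envelope check on those figures – using only the numbers quoted above – the projected 2014-2020 increase works out to well under one percent a year, compared with roughly four percent a year during 2005-2010. A minimal Python sketch of the arithmetic:

```python
# Back-of-the-envelope check of the growth figures quoted in the article:
# roughly 70 billion kWh in 2014, about 73 billion kWh projected for 2020,
# and a 24% rise over 2005-2010. The annualised rates are derived here
# purely for illustration.

def annual_rate(total_growth_factor: float, years: int) -> float:
    """Convert a cumulative growth factor into a compound annual rate."""
    return total_growth_factor ** (1 / years) - 1

growth_2014_2020 = 73 / 70   # ~1.043, i.e. about 4% overall
growth_2005_2010 = 1.24      # 24% overall

print(f"2014-2020: {annual_rate(growth_2014_2020, 6):.1%} per year")  # ~0.7%
print(f"2005-2010: {annual_rate(growth_2005_2010, 5):.1%} per year")  # ~4.4%
```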

Improved efficiency is most evident in the growth rate of physical servers.

From 2000 to 2005, server shipments increased 15% each year, resulting in a near doubling of servers in data centers. From 2005 to 2010, the annual shipment increases fell to 5%, but some of this decline was due to the recession. Nonetheless, this server growth rate is now at 3%, a pace that is expected to continue through 2020.

The reduced server growth rate is a result of the increase in server efficiency, better utilization thanks to virtualization, and a shift to cloud computing. This includes concentration of workloads in so-called “hyperscale” data centers, defined as 400,000 square feet in size and above.

Energy use by data centers may also decline if more work is shifted to hyperscale centers, and best practices continue to win adoption.

 

 

 

Is Intel Going To Dump McAfee

June 29, 2016 by Michael  
Filed under Uncategorized

Intel has run out of ideas about what it is going to do with its security business and is apparently planning to flog it off.

Five years ago Intel bought McAfee in a $7.7bn acquisition. Two years ago it re-branded it as Intel Security. There was talk about chip-based security and how important this would be as the world moved to the Internet of Things.

Now the company has discussed the future of Intel Security with bankers, including a potential sale of the outfit. The semiconductor company has been shifting its focus to higher-growth areas, such as chips for data center machines and Internet-connected devices, as the personal-computer market has declined.

The security sector has seen a lot of interest from private equity buyers. Symantec said earlier this month it was acquiring Web security provider Blue Coat for $4.65 billion in cash, in a deal that will see Silver Lake, an investor in Symantec, enhancing its investment in the merged company, and Bain Capital, majority shareholder in Blue Coat, reinvesting $750 million in the business through convertible notes.

However, Intel’s move into the Internet of Things makes it difficult for the company to exit the security business completely. In fact, some analysts think it will only sell off part of the business and keep some key bits for itself.

Courtesy-Fud

 

Messaging App Line To Set IPO Price After Delay

June 28, 2016 by mphillips  
Filed under Mobile

Japanese messaging app firm Line Corp has held off on setting a tentative price range for its initial public offering (IPO) by one day, until Tuesday, the company said in a regulatory filing, citing the “market environment”.

The IPO price range was originally scheduled to be announced on Monday. Line still plans to list in New York on July 14 and in Tokyo the following day, the filing showed.

On Friday, the S&P 500 fell 3.6 percent, its biggest one-day drop in 10 months, and Japan’s broad Topix index slid 7 percent after Britain voted to exit the European Union.

The equity market in Japan recovered somewhat on Monday as the Topix closed up 1.8 percent, but the delay will allow the company to assess the market in New York and London on Monday before setting the tentative price range, a Line spokesman told Reuters.

Earlier this month, the company announced plans to sell 35 million new shares in an IPO, which would raise 98 billion yen ($963 million) at its initial reference price of 2,800 yen per share.

Line’s listing will go ahead according to its planned schedule, the company said on Friday.

Companies around the world are wrestling with the aftermath of the Brexit vote, which is likely to delay or disrupt upcoming takeovers and initial public offerings. Companies with direct exposure to the British economy are more likely to see their deals scuppered compared with those who are just caught up in global market volatility.

Line has little direct exposure to Britain or Europe. Its main markets are Japan, Indonesia, Taiwan and Thailand.

Line delayed its IPO by two years, buying time to fix weaknesses in its financial reporting controls, bolster staffing and develop its business plan. But in doing so, it left billions of dollars on the table as its valuation shriveled.

 

EU, USA Reach Deal On Data-transfer Pact

June 28, 2016 by mphillips  
Filed under Around The Net

The U.S. and the European Union have reportedly come to an agreement on the language of a key data transfer pact, including limits on U.S. surveillance.

The revamped EU-U.S. Privacy Shield was sent to EU member states overnight, according to a report from Reuters. Privacy Shield would govern how multinational companies handle the private data of EU residents.

Member states are expected to vote on the proposal in July, unnamed sources told Reuters. Representatives of the EU and the U.S. Department of Commerce didn’t immediately respond to requests for comments on the reported deal.

Critics of Privacy Shield, including European privacy regulators, have said the deal is too complex and fails to reflect key privacy principles.

The new language sent to member states includes stricter data-handling rules for companies holding Europeans’ information, Reuters reported. The new proposal also has the U.S. government explaining the conditions under which it would collect data in bulk, according to the report.

Negotiators on both sides of the Atlantic have been rushing to craft a new trans-Atlantic data transfer agreement since the Court of Justice of the European Union struck down Safe Harbor, the previous transfer pact, last October.

The court ruled that Safe Harbor didn’t adequately protect European citizens’ personal information from massive and indiscriminate surveillance by U.S. authorities. Safe Harbor had been in place since 2000.

 

 

IBM Going After Chinese Supercomputer

June 28, 2016 by Michael  
Filed under Computing

The US is clearly embarrassed that the Chinese Sunway TaihuLight system is leading the supercomputer arms race. Now the Department of Energy’s (DOE) Oak Ridge National Laboratory has announced that it is having a new IBM system, named Summit, delivered in early 2018 that will be capable of 200 peak petaflops.

That would make it almost twice as fast as TaihuLight. Summit will be based around IBM Power9 CPUs and Nvidia Volta GPUs and will use only about 3,400 nodes. Each node will have “over half a terabyte” of coherent memory (HBM + DDR4), plus 800GB of non-volatile RAM that serves as a burst buffer or extended memory.
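For a rough sense of scale – illustrative arithmetic using only the figures quoted above, not an official IBM or ORNL specification – that works out to close to 59 teraflops of peak performance per node:

```python
# Rough per-node figure for Summit, using only the numbers quoted in the
# article (200 peak petaflops spread over about 3,400 nodes). Illustrative
# arithmetic, not an official IBM/ORNL specification.

peak_petaflops = 200
nodes = 3_400

teraflops_per_node = peak_petaflops * 1_000 / nodes
print(f"~{teraflops_per_node:.0f} peak teraflops per node")  # roughly 59
```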

IBM is not the only one worried about the Chinese getting ahead on speed. Cray announced this week that its Cray XC systems are now available with the latest Intel Xeon Phi (Knights Landing) processors.

The company said the new XC systems, which feature an adaptive design that supports multiple processor and storage technologies in the same architecture, deliver a 100 per cent performance boost over prior generations. Cray also unveiled the Sonexion 3000 Lustre storage system, which can deliver speeds of almost 100GB/sec in a single rack.  These should be rather good at number crunching too.

Courtesy-Fud

 

Intel And Nokia Joining Forces

June 28, 2016 by Michael  
Filed under Around The Net

Nokia is teaming up with Intel to make its carrier-grade AirFrame Data Center Solution hardware available for an Open Platform Network Functions Virtualization (OPNFV) Lab.

Basically this means that the hardware can be used by the OPNFV collaborative open source community to accelerate the delivery of cloud-enabled networks and applications.

Nokia said the OPNFV Lab will be a testbed for NFV developers and will accelerate the introduction of commercial open source NFV products and services. Developers can test carrier-grade NFV applications for performance and availability.

Nokia is making its AirFrame Data Center Solution available as a public OPNFV Lab with the support of Intel, which is providing Intel Xeon processors and solid state drives to give communications service providers the advantage of testing OPNFV projects on the latest and greatest server and storage technologies.

The Nokia AirFrame Data Center Solution is 5G-ready and Nokia said it was the first to combine the benefits of cloud computing technologies to meet the stringent requirements of the telco world. It’s capable of delivering ultra-low latency and supporting the kinds of massive data processing requirements that will be required in 5G.

Morgan Richomme, NFV network architect for Innovative Services at Orange Labs and OPNFV Functest PTL, said in a release: “NFV interoperability testing is challenging, so the more labs we have, the better it will be collectively for the industry.”

AT&T has officially added Nokia to its list of 5G lab partners working to define 5G features and capabilities. It’s also working with Intel and Ericsson.

Courtesy-Fud

 

AMD Goes 32 Cores Zeplin

June 27, 2016 by Michael  
Filed under Computing

A few months back Nick wrote about an AMD Zen processor entry found on the Linux Kernel Mailing List confirming that Zeppelin had support for eight bundles of four cores on a single chip, or 32 physical processing cores.

This tied in with a story written in August 2015 about an MCM (multi-chip module) that featured a Zeppelin core, a super-fast 100GB/s interconnect via four GMI links and a Greenland (Vega) high-performance GPU with 4+ TFlops of performance. This APU will still happen, it will just be a bit later – at the end of 2017.

Now we have a few more details about the Zeppelin cluster, and this is proving to be another “Fudzilla told you so” moment. Apparently you can put up to four Zeppelin CPU clusters on one chip and make a 32-core part. These will be connected via a coherent interconnect (coherent data fabric).

Each Zeppelin module has eight Zen cores and each Zen core has 512KB of L2 cache. Four Zen cores share 8MB of L3 cache, making the total amount of L3 cache per Zeppelin cluster 16MB.

Each Zeppelin cluster will have PCIe Gen 3, SATA 3, and a 10GbE network connection. A server version of the chip has the server controller hub, DDR4 memory controller and AMD secure processors.

AMD will have at least three pin-compatible versions of the next-generation Opteron using Zeppelin clusters of Zen cores. There will be an eight-core version with a single Zeppelin cluster, a dual Zeppelin cluster version and a quad Zeppelin version – the one we have called Naples – which will have 64MB of L3 cache. All this sounds like rather a lot.
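As a quick sanity check of the cache arithmetic, the figures above add up as follows – a minimal sketch using only the numbers given in the article:

```python
# Sanity check of the cache figures described above: 512KB of L2 per Zen
# core, 8MB of L3 shared by each group of four cores, eight cores per
# Zeppelin cluster, and up to four clusters in the top configuration.

CORES_PER_ZEPPELIN = 8
L2_PER_CORE_KB = 512
L3_PER_FOUR_CORES_MB = 8

l2_per_zeppelin_mb = CORES_PER_ZEPPELIN * L2_PER_CORE_KB / 1024        # 4 MB
l3_per_zeppelin_mb = (CORES_PER_ZEPPELIN // 4) * L3_PER_FOUR_CORES_MB  # 16 MB

for clusters in (1, 2, 4):
    cores = clusters * CORES_PER_ZEPPELIN
    l3_mb = clusters * l3_per_zeppelin_mb
    print(f"{clusters} Zeppelin cluster(s): {cores} cores, {l3_mb} MB L3")
# Four clusters give 32 cores and 64MB of L3, matching the Naples figures.
```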

We are expecting to see Zen-based Opterons in eight-, sixteen- and thirty-two-core versions for servers in 2017.

Courtesy-Fud

 

Twitter Acquires Machine Learning Company Magic Pony

June 22, 2016 by mphillips  
Filed under Around The Net

Twitter has been quite vocal regarding its interest in machine learning in recent years, and earlier this week the company put its money where its mouth is once again by purchasing London startup Magic Pony Technology, which has focused on visual processing.

“Magic Pony’s technology — based on research by the team to create algorithms that can understand the features of imagery — will be used to enhance our strength in live [streaming] and video and opens up a whole lot of exciting creative possibilities for Twitter,” Twitter cofounder and CEO Jack Dorsey wrote in a blog post announcing the news.

The startup’s team includes 11 Ph.Ds with expertise across computer vision, machine learning, high-performance computing and computational neuroscience, Dorsey said. They’ll join Twitter’s Cortex group, made up of engineers, data scientists and machine-learning researchers.

Terms of the deal were not disclosed.

The acquisition follows several related purchases by the social media giant, including Madbits in 2014 and Whetlab last year.

 

 

 

Samsung Pivoting Towards 5G Mobile Technology

June 22, 2016 by mphillips  
Filed under Mobile

Trailing its competitors after past mistakes on wireless technology standards, Samsung Electronics Co Ltd aims to become a global top-three player in 5G mobile networks by moving quickly in markets like the United States, an executive said.

The world’s top smartphone maker ranks well behind peers such as Nokia Corp, Huawei Technologies Co Ltd  and Ericsson in the networks business, after backing CDMA and WiMax wireless technologies that never caught on globally.

The South Korean giant now sees an opportunity to catch up by moving fast and early on 5G, the wireless technology that telecom equipment makers are rushing to develop as the next-generation standard.

“We plan to move quickly and want to be at least among the top three with 5G,” Kim Young-ky, Samsung’s network business chief, told Reuters in an interview.

“It’s important to get in early.”

5G wireless networks could offer data speeds tens of times faster than 4G technology, enabling futuristic products such as self-driving cars and smart-gadgets that tech firms expect to become ubiquitous in the homes of tomorrow.

Major network firms are targeting the United States as it moves rapidly ahead with plans to open spectrum for 5G wireless applications. Some U.S. officials expect to see the first large-scale commercial deployments by 2020.

Samsung is targeting more than 10 trillion won ($8.6 billion) in annual sales of 5G equipment by 2022, a spokeswoman said.

This would be a big step up for a networks business that generated less than 3 trillion won in revenue last year, compared with 100.5 trillion won in mobile device sales.

Crucial to its plans is a partnership with New York-based Verizon Communications Inc to commercialize the technology. Other firms working with Verizon on 5G include Nokia, Ericsson, Qualcomm and Intel Corp.

Verizon is conducting field tests this year and aims to begin deploying 5G trials on home broadband services in the United States in 2017 – likely the first commercially available 5G application, arriving before a broader mobile network standard is agreed.

Samsung – which was a distant fifth player in the global 4G infrastructure market in January-March, according to researcher IHS – declined to comment on which clients it expected to receive 5G equipment orders from.

 

 

 

 

Dell Close To Deal To Sell Software Business

June 21, 2016 by mphillips  
Filed under Computing

Buyout firm Francisco Partners and the private equity arm of activist hedge fund Elliott Management Corp are in advanced negotiations to purchase Dell Inc’s software division for more than $2 billion, three people familiar with the matter said.

Divesting the software assets will help Dell refocus its technology portfolio and bolster its balance sheet after it agreed in October to buy data storage company EMC Corp for $67 billion. EMC owns a controlling stake in VMware Inc, a cloud-based virtualization software company.

Dell is seeking to sell almost all of its software assets, including Quest Software, which helps with information technology management, as well as SonicWall, an e-mail encryption and data security provider, the people said.

Boomi, a smaller asset focusing on cloud-based software integration, will be retained by Dell, one of the people added.

An agreement between Dell and the consortium of Francisco Partners and Elliott could be reached as early as this week, the people said, cautioning that the negotiations could still end unsuccessfully.

The sources asked not to be identified because the negotiations are confidential. Dell declined to comment, while Francisco Partners and Elliott did not immediately respond to requests for comment.

A sale of Dell’s software division would free it from some of its least profitable assets and cap the program of divestitures that the Round Rock, Texas-based computer maker embarked on following its deal with EMC. EMC shareholders are due to vote on the deal with Dell on July 19.

While Elliott has sought to buy companies in the past as part of its shareholder activist campaigns, the Dell software deal would represent its first major private equity investment since it hired Isaac Kim, previously a principal at private equity firm Golden Gate Capital, last year to help expand its capacity in leveraged buyouts.

Francisco Partners focuses on private equity investments in the technology sector. It has raised about $10 billion in capital and invested in more than 150 technology companies since it was launched more than 15 years ago.