Will RISC-V Finally Hit Linux Next Year?

October 16, 2017 by  
Filed under Computing

Linux fanboys tend to announce a lot of “year of” events. There is the year of the Linux desktop, which appears to be every year and still never happens, and now there is the year of the RISC-V Linux processor.

SiFive has declared that 2018 will be the year of the RISC-V Linux processor, so mark your penguin diaries accordingly. In the UK there will be all sorts of events planned, including guess-the-weight-of-Linus-Torvalds competitions, penguin tossing at Slough, a bring-and-buy sale held by the over-80s Linux nudist club, and the open sauce bobsleigh event down the escalators of Covent Garden tube station.

SiFive released its first open-source system on a chip, the Freedom Everywhere 310, last year. At the time it said it was aiming to push the RISC-V architecture to transform the hardware industry in the way that Linux transformed the software industry.

This year it released its U54-MC Coreplex, the first RISC-V-based chip that supports Linux, Unix and FreeBSD. This latest release opens up a whole new world of use cases for the architecture and paves the way for RISC-V processors to compete with ARM cores and similar offerings in the enterprise and consumer space.

The outfit claims that next year companies looking to build SoCs around RISC-V will throng to the new developments.

Andrew Waterman, co-founder and chief engineer at SiFive, said the forthcoming silicon is going to enable much better software development for RISC-V.

Waterman said that, while SiFive has developed low-level software such as compilers for RISC-V, the company hopes the open-source community will take a much broader role from here and really push the technology forward.

“No matter how big of a role we would want to have we can’t make a dent. But what we can do is make sure the army of engineers out there are empowered.”


Is Open Source Winning?

July 17, 2017 by  
Filed under Around The Net

Going way back, pretty much all software was effectively open source. That’s because it was the preserve of a small number of scientists and engineers who shared and adapted each other’s code (or punch cards) to suit their particular area of research. Later, when computing left the lab for the business world, commercial powerhouses such as IBM, DEC and Hewlett-Packard sought to lock in their IP by making software proprietary and charging a hefty licence fee for its use.

The precedent was set and up until five years ago, generally speaking, that was the way things went. Proprietary software ruled the roost and even in the enlightened environs of the INQUIRER office mention of open source was invariably accompanied by jibes about sandals and stripy tanktops, basement-dwelling geeks and hairy hippies. But now the hippies are wearing suits, open source is the default choice of business and even the arch nemesis Microsoft has declared its undying love for collaborative coding.

But how did we get to here from there? Join INQ as we take a trip along the open source timeline, stopping off at points of interest on the way, and consulting a few folks whose lives or careers were changed by open source software.

The GNU project
The GNU Project (for GNU’s Not Unix – a typically in-jokey open source moniker; it’s recursive, don’t you know?) was created in 1983 by archetypal hairy coder Richard Stallman, the man widely regarded as the father of open source. GNU aimed to replace the proprietary UNIX operating system with one composed entirely of free software – meaning code that could be used or adapted without having to seek permission.

Stallman also started the Free Software Foundation to support coders, litigate against those, such as Cisco, who broke licence terms, and defend open-source projects against attack from commercial vendors. And in his spare time, in 1989, Stallman wrote the GNU General Public License (GNU GPL), a “copyleft” licence, which means that derivative works can only be distributed under the same licence terms. Now on its third iteration, GPLv3, it remains the most popular way of licensing open source software. Under the terms of the GPL, code may be used for any purpose, including commercial uses, and even as a tool for creating proprietary software.

Pretty Good Privacy (PGP) encryption was created in 1991 by anti-nuclear activist Phil Zimmermann, who was rightly concerned about the security of the online bulletin boards where he conversed with fellow protesters. Zimmermann decided to give his invention away for free. Unfortunately for him, it was deployed outside his native USA, a fact that nearly landed him with a prison sentence, digital encryption being classed as a munition and therefore subject to export regulations. However, the ever-resourceful Mr Zimmermann challenged the case against him by reproducing his source code in the form of a decidedly undigital hardback book which users could scan using OCR. Common sense eventually won the day, and PGP now underpins much modern communications technology including chat, email and VPNs.
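PGP’s core trick is hybrid encryption: a fast symmetric cipher scrambles the message under a throwaway session key, and the recipient’s public key wraps that session key. A deliberately toy-sized sketch of that shape – the textbook RSA numbers and XOR “stream cipher” below are hopelessly insecure and purely illustrative, nothing like PGP’s real algorithms:

```python
# Toy sketch of PGP-style hybrid encryption. The textbook RSA modulus
# (p=61, q=53) and the XOR "cipher" are insecure stand-ins used only to
# show the two-step shape; real PGP uses algorithms such as RSA and AES.
import secrets

N, E, D = 61 * 53, 17, 2753  # textbook RSA: public key (N, E), private D

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Symmetric step: XOR every byte against the repeating session key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def pgp_encrypt(message: bytes) -> tuple[int, bytes]:
    session_key = secrets.token_bytes(1)          # throwaway key (toy-sized)
    ciphertext = xor_stream(message, session_key)
    # Asymmetric step: wrap the session key with the recipient's public key.
    wrapped = pow(int.from_bytes(session_key, "big"), E, N)
    return wrapped, ciphertext

def pgp_decrypt(wrapped: int, ciphertext: bytes) -> bytes:
    # Only the private-key holder can unwrap the session key.
    session_key = pow(wrapped, D, N).to_bytes(1, "big")
    return xor_stream(ciphertext, session_key)

wrapped, ct = pgp_encrypt(b"meet at the usual bulletin board")
assert pgp_decrypt(wrapped, ct) == b"meet at the usual bulletin board"
```

The point of the split is practical: public-key maths is slow, so it only ever protects the short session key, while the bulk of the message goes through the fast symmetric cipher.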

“PGP represents the democratisation of privacy,” commented Gary Mawdsley, CIO of Anzen Data and developer of security software.

In 1991 Finnish student and misanthrope Linus Torvalds created a Unix-like kernel based on some educational operating system software called MINIX as a hobby project. He opened up his project so that others could comment. And from that tiny egg, a mighty penguin grew.

Certainly, he could never have anticipated being elevated to the position of open-source Messiah. Unlike Stallman, Torvalds – who has said many times that he’s not a “people person” or a natural collaborator (indeed, recent comments have made him seem more like a dictator, albeit a benevolent one) – was not driven by a vision or an ideology. Making Linux open source was almost an accident.

“I did not start Linux as a collaborative project, I started it for myself,” Torvalds said in a TED talk. “I needed the end result but I also enjoyed programming. I made it publicly available but I had no intention to use the open-source methodology, I just wanted to have comments on the work.”

Nevertheless, like Stallman, the Torvalds name is pretty much synonymous with open source and Linux quickly became the server operating system of choice, also providing the basis of Google’s Android and Chrome OS.

“Linux was and is an absolute game-changer,” says Chris Cooper of compliance software firm KnowNow. “It was the first real evidence that open could be as good as paid-for software, and it was the death knell of the OS having a value that IT teams would fight over. It also meant that the OS was no longer a key driver of architectural decisions: the application layer is where the computing investment is now made.”

Red Hat
Red Hat, established in 1995, was among the first proper enterprise open source companies, and went public in 1999 with a highly successful IPO. Because it was willing to bet big on the success of open source at a time when others were not, Red Hat is the most financially buoyant open source vendor, achieving a turnover of $1bn some 13 years after that IPO. Its business model revolves around offering services and certification around its own Linux distribution, plus middleware and other open source enterprise software.

“Red Hat became successful by making open source stable, reliable and secure for the enterprise,” said Jan Wildeboer, open source affairs evangelist at the firm.



Jide Technology Brings Android To The Desktop

January 18, 2016 by  
Filed under Computing

Jide Technology has released an Alpha build of its much praised Remix OS version of Android, available free of charge.

The Android fork, which adds conventional desktop features such as a taskbar, start menu and support for multiple windows, has been a huge hit, overshadowing the implementation of Android revealed in Google’s recent high-end tablet, the Pixel C.

The initial build, as ever, is designed to fish for bugs and aid developers. A beta will follow in the coming weeks. The Alpha doesn’t contain Google Mobile Services apps such as the Play Store and Gmail, but the finished version will. In the meantime, users can sideload the GApps package or go to the Amazon Appstore.

There may also be problems with some video codecs, but we’re told this is a licensing issue which will be resolved in the final version too. In the meantime, the first release is perfectly usable.

Compatibility with most Android apps is instant, but the user community can ‘upvote’ their favourites on the Remix OS site to flag what’s working best in each category.

The company has already released a small desktop machine of its own, called the Remix Mini, the world’s first fully functioning Android PC, priced at just $70 after a successful Kickstarter campaign. It has also developed a 2-in-1 ultrabook, the Remix Ultra, and has licensed Remix OS to several Far East tablet manufacturers.

In this new move, the company has teamed up with Android-x86, a group that has been working on a version of Android executable on PCs since 2009, to launch a Remix OS installer that will let existing hardware run Remix OS, either as the sole operating system or as a partition on a dual-boot machine.

A third option is to store the OS on a USB stick, meaning that you can make any computer your own. This technique has already proved popular through the Keepod programme, which offers Android on a stick in countries without widespread access to modern computers.

The advantages of Remix OS to the developing world are significant. Bench tests have shown Remix OS running significantly faster than Windows on the same hardware, which could breathe new life into older machines and give modern machines a welcome turn of speed.

Remix OS was designed by three ex-Google engineers and includes access to the full Google Apps suite and the Google Play store.

David Ko, co-founder of Jide Technology, said: “Today’s public release of Remix OS, based on Android-x86, is something that we’ve been working towards since we founded Jide Technology in 2014.

“All of us are driven by the goal of making computing a more accessible experience, and this free, public release allows us to do this. We believe Remix OS is the natural evolution of Android and we’re proud to be at the forefront of this change.”

The public Alpha will be available to download from Jide and android-x86 from 12 January, and a beta update is expected swiftly afterwards. The INQUIRER has been using a Remix Mini for over a month now, and a full review of the operating system is coming soon.



Linux Updated To Offer Better Support For Intel’s Skylake

January 14, 2016 by  
Filed under Computing

Linus Torvalds has confirmed the release of Linux Kernel 4.4 right on schedule.

The release has gone ahead as planned, despite some problems in mid-December. Linux kernel releases are based around a schedule rather than any specific features, but that hasn’t stopped a number of big additions to the code base provided by the community.

Perhaps the biggest news is improved support for cutting-edge chipsets including Intel’s Skylake, ARM’s 64-bit processors and Qualcomm’s Snapdragon 820.

In graphics, there’s support for AMD Stoney, and GPU additions for AMD’s Carrizo, Tonga and Fiji processors.

Raspberry Pi users can benefit from the beginnings of a KMS driver courtesy of Broadcom. This version is for kernel mode-setting only, so as yet there’s no support for 3D hardware acceleration or power management.

Other 3D support arrives in the new version in the form of a VirtIO GPU driver, which will be used to provide OpenGL acceleration for guest virtual machines, joining VMware and VirtualBox as options.

As ever, the community is quite often hampered by lack of source information from the chip manufacturers, and as Michael Larabel notes on Phoronix, this is hindering work on support for several Nvidia chips.

Torvalds, never backward in coming forward, managed to take a little dig at one of Linux’s distant cousins.

“The changes since RC8 aren’t big. There’s about one third arch updates, one third drivers and one-third ‘misc’ (mainly some core kernel and networking), but it’s all small,” he said.

“Notable might be unbreaking the x86-32 ‘sysenter’ ABI, when somebody [*cough* android-x86 *cough*] misused it by not using the VDSO and instead using the instruction directly.”

Other noteworthy additions include updates for ARM SoCs, and improvements to ARM UEFI 2.5.

Network additions include support for the latest Realtek driver and for persistent maps and programs with eBPF.

Other products getting support include Intel Lewisburg sound, putting work ahead of schedule for the release of Intel’s Purley platform later in the year. There’s better support for Skylake Windows 8 touchpads, the Corsair Vengeance K90 keyboard driver and the Logitech G29 racing wheel.

Google Fiber TV’s remote control gets a boost, Toshiba laptops should play nicer, and there’s better compatibility for the 2016 Chromebook Pixel.

We could go on and on (there are over 2,400 bits of rather techie changes) but instead we reckon you’re better off getting stuck in and finding them for yourself. After all, that’s what Linux is about.

Time waits for no contributor. The window for Linux 4.5 is now open.



The Linux Foundation Goes To Intel To Accelerate Development

October 7, 2015 by  
Filed under Computing

Jim Zemlin, chief executive of the Linux Foundation, said in his opening remarks at LinuxCon Europe that this year’s opening day falls on the 24th anniversary of Linux itself and the 30th of the Free Software Foundation, giving credit to delegates for their part in the success of both.

He also noted that research into the Linux codebase has valued the code produced over the past few years at more than $5bn.

As part of the launch he also made three key announcements. Firstly, a workgroup is being created to standardise the future of the software supply chain. The OpenChain workgroup is centred on creating best practices to ease licence compliance for open source developers and companies.

It is hoped that this will significantly reduce cost and duplication of effort, easing friction points in the supply chain. The workgroup’s founding members include ARM, Cisco, nexB, Qualcomm, SanDisk and Wind River.

By providing a baseline process, which can then be customised according to customer need, Linux developers will have a basis for monitoring and developing compliance programmes.

Existing best practices such as Debian and the Software Package Data Exchange will be used as foundations for the framework.

The second announcement involves accelerating real-time Linux development. The Real-Time Linux Collaborative Project will bring together industry leaders and thinkers to advance the kind of technology that is crucial to the robotics, telecoms, manufacturing, aviation and medical industries.

Two of this morning’s keynotes centred on real-time Linux. Sean Gourley, founder of big data analytics firm Quid, talked about the $300m spent on a new London-to-New York undersea cable to shave just five milliseconds off data transit time, and the seven minutes of downtime the New York Stock Exchange has to suffer while humans crunch the impact of a Treasury announcement.

The Real-Time Linux Collaborative Project brings together organisations as diverse as Google, Texas Instruments, Intel, ARM and Altera.

Thomas Gleixner of the Open Source Automation Development Lab has been made a Linux Foundation fellow in order to lead the process of integrating real-time code into the main Linux kernel, which Zemlin joked would be finished within six months.

In reality this is a long-term goal, albeit a highly achievable one that could revolutionise a number of key industries.

Finally, FOSSology, the open source licence compliance software project and toolkit founded by HP in 2007, is moving home to become part of the Linux Foundation. With it comes FOSSology 3.0, due for release this week.

“As Linux and open source have become the primary building blocks for creating today’s most innovative technologies, projects like FOSSology are more relevant than ever,” said Zemlin.

“FOSSology’s proven track record for improving efficiency in licence compliance is the perfect complement to a suite of open compliance initiatives hosted at the Linux Foundation. This work is among the most important that we all do.”

FOSSology allows companies to run licence and copyright scans in a single click, and to generate a Software Package Data Exchange (SPDX) document or readme file.

By moving the project to the Linux Foundation, the toolkit is kept in neutral hands alongside other initiatives such as the Core Infrastructure Initiative, the Open Container Project and Dronecode.

Dronecode’s Lorenz Meier spoke alongside Tully Foote of the Open Source Robotics Foundation about their quest to “take back” the term ‘drone’ from its negative military connotations.

The team, whose work in Switzerland dates back to “when they were still called model aircraft”, included information about MAVLink, the self-styled ‘HTML for drones’, and Robot Operating System, a meta operating system for autonomous devices.

The team has been concentrating primarily on using telemetry data to allow drones to navigate around objects, in a similar way to that being achieved by Google’s self-driving cars.

LinuxCon Europe runs until Wednesday, bringing together representatives from back-bedroom developers to giant corporations like Facebook, all sharing a common goal: to nurture a community which approaches its quarter century primed to take over even more aspects of our everyday lives – quiet, unassuming but always there.

Speakers this year include people from Suse, Red Hat, Google, Raspberry Pi and the godfather of Linux, Linus Torvalds.

The INQUIRER will be talking tomorrow to some top bods from the Linux community. So early to bed for us tonight and absolutely no Guinness.


IBM Will Use Apache Spark To Find E.T.

October 2, 2015 by  
Filed under Computing

IBM is using Apache Spark to analyse radio signals for signs of extraterrestrial intelligence.

Speaking at Apache: Big Data Europe, Anjul Bhambhri, vice president of big data products at IBM, talked about how the firm has thrown its weight behind Spark.

“We think of [Spark] as the analytics operating system. Never before have so many capabilities come together on one platform,” Bhambhri said.

Spark is a key project because of its speed and ease of use, and because it integrates seamlessly with other open-source components, Bhambhri explained.

“Spark is speeding up even MapReduce jobs, even though they are batch oriented, by two to six times. It’s making developers more productive, enabling them to build applications in less time and with fewer lines of code,” she claimed.

She revealed that IBM is working with NASA and SETI to analyse radio signals for signs of extraterrestrial intelligence, using Spark to process the 60Gbit of data generated per second by various receivers.

Other applications IBM is working on with Spark include genome sequencing for personalised medicine via the ADAM project at UC Berkeley in California, and early detection of conditions such as diabetes by analysing patient medical data.

“At IBM, we are certainly sold on Spark. It forms part of our big data stack, but most importantly we are contributing to the community by enhancing it,” Bhambhri said.

The Apache: Big Data Europe conference also saw Canonical founder Mark Shuttleworth outline some of the key problems in starting a big data project, such as simply finding engineers with the skills needed just to build the infrastructure for operating tools such as Hadoop.

“Analytics and machine learning are the next big thing, but the problem is there are just not enough ‘unicorns’, the mythical technologists who know everything about everything,” he explained in his keynote address, adding that the blocker is often just getting the supporting infrastructure up and running.

Shuttleworth went on to demonstrate how the Juju service orchestration tool developed by Canonical could solve this problem. Juju enables users to describe the end configuration they want, and will automatically provision the servers and software and configure them as required.
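That declarative approach can be pictured as a Juju bundle: the operator names the desired end state and Juju does the provisioning. The charm names, unit counts and relation below are illustrative assumptions only, not a tested deployment recipe:

```yaml
# Hypothetical bundle sketch: three worker units alongside a master,
# with Juju left to provision the machines and wire up the relation.
applications:
  hadoop-master:
    charm: hadoop
    num_units: 1
  hadoop-worker:
    charm: hadoop
    num_units: 3
relations:
  - ["hadoop-master", "hadoop-worker"]
```

The appeal Shuttleworth described is exactly this inversion: the operator states *what* should exist, not the sequence of steps to build it.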

This could be seen as a pitch for Juju, but Shuttleworth’s message was that the open-source community is delivering tools that can manage the underlying infrastructure so that users can focus on the application itself.

“The value creators are the guys around the outside who take the big data store and do something useful with it,” he said.

“Juju enables them to start thinking about the things they need for themselves and their customers in a tractable way, so they don’t need to go looking for those unicorns.”

The Apache community is working on a broad range of projects, many of which are focused on specific big data problems, such as Flume for handling large volumes of log data or Flink, another processing engine that, like Spark, is designed to replace MapReduce in Hadoop deployments.



The Linux Foundation Donates To Open Source Security

June 24, 2015 by  
Filed under Computing

The Linux Foundation’s Core Infrastructure Initiative (CII) has announced a $500,000 investment in three projects designed to improve the security of open source technologies and services.

The money will fund the Reproducible Builds, Fuzzing Project and False-Positive-Free Testing initiatives.

The $200,000 Reproducible Builds funding will support Debian developers Holger Levsen and Jérémy Bobbio in their efforts to improve the security of the Debian and Fedora operating systems by letting developers independently verify the authenticity of binary distributions.

The feature will help people working on the systems to avoid introducing flaws during the build process and reduce unneeded variation in distribution code.
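“Independently verify” here means bit-for-bit identity: two parties build the same source and compare cryptographic digests of the output. A minimal sketch of the check itself, with byte strings standing in for real build artifacts:

```python
# If a build is reproducible, independent builds of the same source yield
# binaries whose digests match bit-for-bit; a mismatch means the build
# injected nondeterminism (timestamps, paths) -- or something worse.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

# Stand-ins for the same package built on two different machines.
build_a = b"\x7fELF...deterministic output..."
build_b = b"\x7fELF...deterministic output..."

matches = digest(build_a) == digest(build_b)
print("reproducible" if matches else "NOT reproducible")  # prints "reproducible"
```

The hard part, and the bulk of the funded work, is not the comparison but making real toolchains emit that deterministic output in the first place.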

The $60,000 Fuzzing Project investment will aid security researcher Hanno Böck’s efforts to coordinate and improve fuzzing, a software testing technique that feeds malformed or random inputs to a program in order to uncover security problems.

It has been used successfully to find flaws in high-profile technologies including GnuPG and OpenSSL.
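Fuzzing in its simplest form is just that: generate random inputs, run the target, record crashes. A minimal stdlib sketch against a deliberately buggy stand-in parser (nothing here is real GnuPG or OpenSSL code):

```python
# Minimal random fuzzer: hurl short random byte strings at a target and
# collect every input that raises. The target has a planted bug -- it
# trusts a length byte without checking it against the actual data size.
import random

def fragile_parse(data: bytes) -> int:
    if not data:
        return 0
    length = data[0]
    return data[length]  # IndexError when length >= len(data)

def fuzz(target, rounds=500, seed=1234):
    rng = random.Random(seed)  # fixed seed so crashes are reproducible
    crashes = []
    for _ in range(rounds):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(blob)
        except Exception as exc:
            crashes.append((blob, type(exc).__name__))
    return crashes

found = fuzz(fragile_parse)
print(f"fuzzer found {len(found)} crashing inputs")
```

Production fuzzers such as those the project coordinates add coverage feedback and input mutation, but the loop above is the essential idea.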

The final $192,000, for False-Positive-Free Testing, will go to Pascal Cuoq, chief scientist and co-founder of TrustInSoft, for his work to build an open source TIS Interpreter that will reduce false-positive threat detections in the TIS Analyser.

The overall funding will be overseen by Linux security expert Emily Ratliff, who expects the initiative to centralise the open source community’s security efforts.

“I’m excited to join the Linux Foundation and work on the CII because improving the security of critical open source infrastructure is a bigger problem than any one company can tackle on its own,” she said.

“I’m looking forward to working with CII members to more aggressively support underfunded projects and work to change the way the industry protects and fortifies open source software.”

The funding follows the discovery of several critical bugs in widely used open source technologies, one of the biggest of which was Heartbleed.

Heartbleed is a flaw in the OpenSSL implementation of the TLS protocol used by open source web servers such as Apache and Nginx, which host around 66 percent of all sites.

The funding is one of many initiatives launched by the Linux Foundation designed to stop future Heartbleed-level flaws; the Foundation announced an open audit of OpenSSL’s security in March.


Apache Goes To Hadoop Clusters

June 30, 2014 by  
Filed under Computing

Apache Spark, a high-speed analytics engine for the Hadoop distributed processing framework, is now available to plug into the YARN resource management tool.

This development means that it can now be easily deployed along with other workloads on a Hadoop cluster, according to Hadoop specialist Hortonworks.

Released as version 1.0.0 at the end of May, Apache Spark is a high-speed engine for large-scale data processing, created with the aim of being much faster than Hadoop’s better-known MapReduce function, but for more specialised applications.

Hortonworks vice president of Corporate Strategy Shaun Connolly told The INQUIRER, “Spark is a memory-oriented system for doing machine learning and iterative analytics. It’s mostly used by data scientists and high-end analysts and statisticians, making it a sub-segment of Hadoop workloads but a very interesting one, nevertheless.”

As a relatively new addition to the Hadoop suite of tools, Spark is getting a lot of interest from developers using the Scala language to perform analysis on data in Hadoop for customer segmentation or other advanced analytics techniques such as clustering and classification of datasets, according to Connolly.

With Spark certified as YARN-ready, enterprise customers will be able to run memory- and CPU-intensive Spark applications alongside other workloads on a Hadoop cluster, rather than having to deploy them in a separate cluster.

“Since Spark has requirements that are much heavier on memory and CPU, YARN-enabling it will ensure that the resources of a Spark user don’t dominate the cluster when SQL or MapReduce users are running their application,” Connolly explained.

Meanwhile, Hortonworks is also collaborating with Databricks, a firm founded by the creators of Apache Spark, in order to ensure that new tools and applications built on Spark are compatible with all implementations of it.

“We’re working to ensure that Apache Spark and its APIs and applications maintain a level of compatibility, so as we deliver Spark in our Hortonworks Data Platform, any applications will be able to run on ours as well as any other platform that includes the technology,” Connolly said.


Does Apache Need To Be Patched?

April 30, 2014 by  
Filed under Computing

The Apache Software Foundation has released an advisory warning that a patch issued in March for a zero-day vulnerability in Apache Struts did not fully fix the bug. A patch for the patch is in development and will likely be released within the next 72 hours.

Rene Gielen of the Apache Struts team said that once the release is available, all Struts 2 users are strongly recommended to update their installations. In the meantime, the ASF has provided a temporary mitigation that users are urged to apply. On March 2, a patch was made available for a ClassLoader vulnerability affecting Struts 2; all it took was for an attacker to manipulate the ClassLoader via request parameters. However, Apache has admitted that its fix was insufficient to repair the vulnerability. An attacker exploiting it could also cause a denial-of-service condition on a server running Struts 2.

The advisory states: “The default upload mechanism in Apache Struts 2 is based on Commons FileUpload version 1.3 which is vulnerable and allows DoS attacks. Additional ParametersInterceptor allows access to ‘class’ parameter which is directly mapped to getClass() method and allows ClassLoader manipulation.”
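Temporary mitigations of this kind typically work by excluding the dangerous parameter names in the ParametersInterceptor configuration. The fragment below is an illustrative sketch only; the exact exclusion regex should be taken from the ASF advisory, not from here:

```xml
<!-- Hypothetical struts.xml fragment: re-declare the default stack and
     stop request parameters such as class.classLoader.* from reaching
     getClass(). Regexes are illustrative, not the official fix. -->
<interceptor-stack name="hardenedStack">
  <interceptor-ref name="defaultStack">
    <param name="params.excludeParams">^class\..*,^dojo\..*,^struts\..*</param>
  </interceptor-ref>
</interceptor-stack>
```

The weakness of regex blacklists is exactly why the March patch proved insufficient: an incomplete pattern leaves alternative spellings of the same parameter reachable.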

It will be the third time that Struts has been updated this year. In February, the Apache Struts team urged developers to upgrade Struts 2-based projects to use a patched version of the Commons FileUpload library to prevent denial-of-service attacks.



Intel Goes Apache Hadoop

February 28, 2013 by  
Filed under Computing

Intel has released its Apache Hadoop distribution, claiming significant performance benefits through its hardware and software optimisation.

Intel’s push into the datacentre has largely been visible through its Xeon chips, but the firm works pretty hard on software as well, contributing to open source projects such as the Linux kernel and Apache’s Hadoop to ensure that its chips win benchmark tests.

Now Intel has released its Apache Hadoop distribution – the third major revision of its work on Hadoop – citing significant performance benefits and promising to open source much of its work and push it back upstream into the Hadoop project.

According to Intel, most of the work it has done in its Hadoop distribution is open source; however, the firm said it will retain the source code for the Intel Manager for Apache Hadoop, the cluster management part of the distribution. Intel said it will use this to offer support services to datacentres that deploy large Hadoop clusters.

Boyd Davis, VP and GM of Intel’s Datacentre Software Division, said: “People and machines are producing valuable information that could enrich our lives in so many ways, from pinpoint accuracy in predicting severe weather to developing customised treatments for terminal diseases. Intel is committed to contributing its enhancements made to use all of the computing horsepower available to the open source community to provide the industry with a better foundation from which it can push the limits of innovation and realise the transformational opportunity of big data.”

Intel trotted out some impressive industry partners that it has been working with on the Hadoop distribution. While the firm’s direct income from the distribution will come from support services, the indirect income from Xeon chip sales is likely what Intel is most looking towards as Hadoop adoption grows to manage the extremely large data sets that the industry calls “big data”.


Dell Links Up With The Apache Foundation

October 26, 2012 by  
Filed under Computing

Dell is offering the Apache Software Foundation access to its Zinc ARM-based server for development and testing purposes.

Dell had already shown off its Copper ARM-based server earlier this year and said it intends to bring ARM servers to market “at the appropriate time”. Now the firm has given the Apache Software Foundation access to another Calxeda-based ARM server, codenamed Zinc.

Dell’s decision to give the Apache Software Foundation access to the hardware is not surprising, as the foundation oversees development of the popular Apache HTTPD, Hadoop and Cassandra software products, all applications that are widely regarded as perfect for ARM-based servers. The firm said its Zinc server is accessible to all Apache projects for the development and porting of applications.

Forrest Norrod, VP and GM of Server Solutions at Dell, said: “With this donation, Dell is further working hand-in-hand with the community to enable development and testing of workloads for leading-edge hyperscale environments. We recognize the market potential for ARM servers, and with our experience and understanding of the market, are enabling developers with systems and access as the ARM server market matures.”

Dell didn’t give any technical details on its Zinc server and said it won’t be generally available. However, the firm reiterated its goal of bringing ARM-based servers to market. Given that it is helping the Apache Foundation, a good indicator of ARM server viability will be when the Apache web server project has been ported to the ARM architecture and has matured to production status.




IBM Goes After Apache’s Tomcat

May 3, 2012 by  
Filed under Computing

Java developers looking for a mobile-friendly platform could be happy with the next release of IBM’s Websphere Application Server, which is aimed at offering a lighter, more dynamic version of the app middleware.

Shown off at the IBM Impact show in Las Vegas on Tuesday, Websphere Application Server 8.5, codenamed Liberty, has a footprint of just 50MB. This makes it small enough to run on machines such as the Raspberry Pi, according to Marie Wieck, GM for IBM Application and Infrastructure Middleware.

Updates and bug fixes can also be done on the fly with no need to take down the server, she added.

The Liberty release will be launched this quarter, and already has 6,000 beta users, according to Wieck.

John Rymer of Forrester said that the compact and dynamic nature of the new version of WebSphere Application Server could make it a tempting proposition for Java developers.

“If you want to install version seven or eight, it’s a big piece of software requiring a lot of space and memory. The installation and configuration is also tricky,” he explained.

“Java developers working in the cloud and on mobile were moving towards something like Apache Tomcat. It’s very light, starts up quickly and you can add applications without having to take the system down. IBM didn’t have anything to respond to that, and that’s what Liberty is.”

For firms needing to update applications three times a year, for example, Liberty’s dynamic update capability will make the process much easier.

“If developers want to run Java on a mobile device, this is good,” Rymer added.

The new features are also backwards compatible, meaning current WebSphere users will be able to take advantage of the improvements.

However, IBM could still have difficulty competing in the app server space on a standalone basis, according to Rymer.

“Red Hat JBoss costs considerably less, and there’s been an erosion for IBM as it’s lost customers to Red Hat and Apache. Liberty might have an effect here,” he said.

“But IBM wins where the customer isn’t just focused on one product. It will never compete on price, but emphasises the broader values of a platform or environment.”

IBM will be demoing WebSphere running on a Raspberry Pi at Impact today.




Apache Finally Goes To The Cloud

January 5, 2012 by  
Filed under Computing

The Apache Software Foundation (ASF) has announced Hadoop 1.0.

The open source software project has reached the milestone of its first full release after six years of development. Hadoop is a software framework for reliable, scalable and distributed computing under a free licence. Apache describes it as “a foundation of cloud computing”.

“This release is the culmination of a lot of hard work and cooperation from a vibrant Apache community group of dedicated software developers and committers that has brought new levels of stability and production expertise to the Hadoop project,” said Arun Murthy, VP of Apache Hadoop.

“Hadoop is becoming the de facto data platform that enables organizations to store, process and query vast torrents of data, and the new release represents an important step forward in performance, stability and security,” he added.

Apache Hadoop allows for the distributed processing of large data sets, often petabytes in size, across clusters of computers using a simple programming model.
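The "simple programming model" in question is MapReduce. As a loose illustration only (plain Python standing in for the actual Hadoop Java API), the classic word-count job can be expressed as a map step that emits key/value pairs and a reduce step that aggregates them:

```python
from collections import defaultdict

def map_phase(document):
    """Emit a (word, 1) pair for every word -- the 'map' step."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Group the pairs by key and sum the values -- the 'reduce' step."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# In real Hadoop the map tasks run in parallel across the cluster and a
# shuffle stage routes each key to one reducer; here we simply chain them.
docs = ["the quick brown fox", "the lazy dog"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
result = reduce_phase(pairs)
```

The point of the model is that the framework, not the programmer, handles partitioning the input, scheduling the parallel map tasks and routing intermediate keys to reducers, which is what lets the same few lines of logic scale across thousands of nodes.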

The Hadoop framework is used by some big-name organisations including Amazon, eBay, IBM, Apple, Facebook and Yahoo.

Yahoo has significantly contributed to the project and hosts the largest Hadoop production environment with more than 42,000 nodes.

Jay Rossiter, SVP of the cloud platform group at Yahoo said, “Apache Hadoop will continue to be an important area of investment for Yahoo. Today Hadoop powers every click at Yahoo, helping to deliver personalized content and experiences to more than 700 million consumers worldwide.”



New Google Tool Makes Websites Twice as Fast

November 9, 2010 by  
Filed under Around The Net

Google wants to make the Web faster. As well as optimizing its own sites and services to run at blazing speed, the company has been helping to streamline the rest of the Web, too. Now Google has released free software that could make many sites load twice as fast.

The software, called mod_pagespeed, can be installed and configured on Apache web servers, the most commonly used software for running websites. Once installed, mod_pagespeed finds ways to optimize a site’s performance on the fly. For example, it compresses images more efficiently and changes settings so that more of a page is stored in the user’s browser cache, sparing the browser from downloading the same data repeatedly. The software will be updated automatically, notes Richard Rabbat, product manager for the new project. He says this means that as Google and others make improvements, people who install it will benefit without having to make any changes.
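As a rough sketch of what "installed and configured on Apache" looks like in practice (directive names are from the mod_pagespeed project; the module path is an assumption and varies by distribution and install):

```apache
# Load the module -- the .so path here is illustrative and install-specific
LoadModule pagespeed_module /usr/lib/apache2/modules/mod_pagespeed.so

<IfModule pagespeed_module>
    ModPagespeed on
    # Enable a couple of rewriting filters: recompress images and
    # extend cache lifetimes so browsers re-fetch less data
    ModPagespeedEnableFilters rewrite_images,extend_cache
</IfModule>
```

A sensible set of filters is enabled by default, so the two-line minimum is simply loading the module and turning it on; the filter list is how site operators opt in to more aggressive rewriting.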

“We think making the whole Web faster is critical to Google’s success,” says Rabbat. Making the Web faster should encourage people to use it more and increase the likelihood that they will use Google’s services and software. Rabbat points to the frustration that people feel when they click a link or type a URL and see a blank page for several seconds. “In many cases,” he says, “I’ll navigate away when that happens.”

Google already offers a tool called Page Speed that measures the speed at which a website loads and suggests ways to make improvements. “We asked ourselves, instead of just telling people what the problems are, can we just fix it for them automatically?” Rabbat says.

The software could be particularly useful to operators of small websites, who may not have the skill or time to optimize their site’s performance themselves. It should also help companies that run their websites on content management systems and lack the expertise to tune the Web server software for speed.

Google tested mod_pagespeed on a representative sample of websites and found that it made some sites load three times faster, depending on how much optimization had already been done.

Speeding up the Web has a clear financial payoff for Google. “If websites are faster, Google makes more money,” says Ed Robinson, CEO of Aptimize, a startup that also provides software that automatically optimizes Web pages, much as Google’s new offering does. Robinson explains that the faster a website is, the more pages users will view, and the more ads Google can serve—on its search pages or through its ad networks. Because the company’s reach is so wide, even small improvements can add up to massive revenue gains for the Web giant. He adds, “Making the Web faster is the logical next step for moving the Web forward.”