IBM’s new Power8 chip technology unveiled

IBM Unveils Power8 Chip As Open Hardware. Google and other OpenPower Foundation partners have expressed interest in IBM’s Power8 chip designs and server motherboard specs because Power8 has been designed with specific big-data handling characteristics. It is, for example, an eight-threaded processor: each of the 12 cores in a CPU can coordinate the processing of eight sets of instructions at a time, for a total of 96 concurrent instruction streams. A “process” here is to be understood as a set of related instructions making up a discrete unit of work within a program. By designating sections of an application that can run as such processes and coordinating the results, the chip can accomplish more work than a single-threaded chip.
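To make the arithmetic concrete, here is a minimal Python sketch. It is purely illustrative: it is not Power8-specific code, Python threads do not deliver true hardware parallelism for CPU-bound work, and the workload is invented. It only shows how a program can be divided into 96 concurrently coordinated streams of work, mirroring the 12 cores × 8 threads figure above.

```python
from concurrent.futures import ThreadPoolExecutor

# 12 cores x 8 hardware threads per core = 96 concurrently scheduled streams.
CORES = 12
THREADS_PER_CORE = 8
HARDWARE_THREADS = CORES * THREADS_PER_CORE  # 96

def process_chunk(chunk):
    """Stand-in for one discrete 'set of related instructions' within a program."""
    return sum(chunk)

# Split a workload into 96 chunks and let a pool of 96 workers coordinate them.
data = list(range(9_600))
chunks = [data[i::HARDWARE_THREADS] for i in range(HARDWARE_THREADS)]

with ThreadPoolExecutor(max_workers=HARDWARE_THREADS) as pool:
    partial_sums = list(pool.map(process_chunk, chunks))

print(sum(partial_sums))  # same total as summing the data serially
```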

By licensing technology to partners, IBM is borrowing a tactic used by ARM in the market for chips used in smartphones and tablets. But the company faces an uphill battle.

More information:

http://openpowerfoundation.org/

http://bits.blogs.nytimes.com/

New Flashbook: DB2 10.5 with BLU Acceleration

A free ebook will be available for you to download on this page in the coming days.


Just in time for IDUG, Paul Zikopoulos and his team of co-authors have created a new ebook to help you deepen your skills regarding the latest release. Here are some details about the flashbook:

Title:

DB2 10.5 with BLU Acceleration – New Dynamic In-Memory Analytics for the Era of Big Data

Authors:

Paul Zikopoulos, Matthew Huras, George Baklarz, Sam Lightstone, Aamer Sachedina

Technical editor: Roman B. Melnyk

Coverage includes:

  • Speed of Thought Analytics with new BLU Acceleration

  • Always Available Transactions with enhanced pureScale reliability

  • Unprecedented Affordability with optimization for SAP workloads

  • Future Proof Versatility with business grade NoSQL and mobile database for greater application flexibility

About the book:

If big data is an untapped natural resource, how can you find the gold dust hidden within?  Leaders realize that big data means all data, and are moving quickly to understand both structured and unstructured application data.  However, analyzing this data without impacting the performance and reliability of essential business applications can prove costly and complex.

In the new era of big data, businesses require data systems that can blend always-available transactions with speed-of-thought analytics. DB2 10.5 with the new BLU Acceleration delivers this speed, simplicity and cost efficiency while also providing the ability to build next-generation applications with NoSQL features.

With this book, you’ll learn about the power and flexibility of multi-workload, multi-platform database software.  Use the comprehensive knowledge from this book to get started with the latest DB2 release by downloading the trial version.  Visit ibm.com/developerworks/downloads/im/db2/

IBM's BLU Acceleration aims to change Big Data

“In contrast to some competitors, the company believes Big Data isn’t some new issue requiring emerging or arcane technologies,” said analyst Charles King on IBM’s BLU Acceleration technologies. “Instead, IBM views Big Data as a fundamental challenge that stretches across the IT landscape, tangibly affecting the technology market as a whole.”

IBM has just rolled out three new products, including technologies that promise reporting and analytics up to 25 times faster.

  1. A new acceleration technology, “BLU Acceleration,” targeting DB2 solutions.
  2. A new IBM PureData System for Hadoop. Hadoop is the game-changing open-source solution for big data.
  3. A new version of InfoSphere BigInsights, IBM’s enterprise-ready Hadoop offering, which makes it simpler to develop applications using existing SQL skills.


About “BLU Acceleration”:

  1. Extending the in-memory approach: IBM says BLU Acceleration makes it possible for users to access key information more quickly, which, in turn, leads to better decision-making. Data is loaded into RAM instead of being read from hard disks for faster performance, and the system delivers in-memory performance even when data sets exceed the size of available memory.
  2. BLU Acceleration includes “data skipping,” the ability to skip over data that doesn’t need to be analyzed, such as duplicate information (see the sketch after this list). Other innovations include the ability to analyze data in parallel across different processors and a greater ability to analyze data transparently to the application, without the need to develop a separate layer of data modeling.
  3. Another industry-first advance in BLU Acceleration is called “actionable compression,” whereby data no longer has to be decompressed to be analyzed. During testing, some queries in a typical analytics workload ran more than 1,000 times faster when using the combined innovations of BLU Acceleration.
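For readers who want a feel for these two ideas, the sketch below is purely conceptual and is not IBM’s implementation; the column blocks, min/max metadata and dictionary codes are invented for illustration. It shows data skipping (ignoring whole blocks that cannot satisfy a predicate) and the gist of actionable compression (evaluating a predicate directly on order-preserving compressed codes, with no decompression).

```python
# Conceptual sketch only; not IBM's BLU Acceleration implementation.
# The blocks, metadata and dictionary codes below are invented for illustration.

# Data skipping: each block of a column carries lightweight min/max metadata,
# so whole blocks can be ignored when they cannot satisfy a predicate.
blocks = [
    {"min": 1,   "max": 90,  "values": [5, 42, 90, 1]},
    {"min": 100, "max": 250, "values": [100, 175, 250]},
    {"min": 300, "max": 420, "values": [300, 301, 420]},
]

def scan_greater_than(threshold):
    """Return matching values, skipping blocks whose max rules them out."""
    matches = []
    for block in blocks:
        if block["max"] <= threshold:      # data skipping: whole block ignored
            continue
        matches.extend(v for v in block["values"] if v > threshold)
    return matches

print(scan_greater_than(200))  # -> [250, 300, 301, 420]

# "Actionable compression" idea: with an order-preserving dictionary encoding,
# a range predicate can be evaluated on the compact codes without decompressing.
dictionary = {"bronze": 0, "silver": 1, "gold": 2}  # order-preserving codes
encoded_column = [2, 0, 1, 2, 0]                    # compressed representation

threshold_code = dictionary["silver"]
hits = [code for code in encoded_column if code >= threshold_code]
print(len(hits))  # -> 3 rows with tier >= "silver", found without decoding
```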


2013 Big Data Predictions

At its annual investor briefing, IBM revealed that it is raising its 2015 revenue target for big data and analytics from the previous $16 billion to $20 billion. IBM’s strategic growth initiatives include business analytics, Smarter Planet, cloud computing, and emerging growth markets, which the company positions as key drivers of growth toward its 2015 goals.

Big Money For Big Data

Cost is a very important factor for many corporations, and it is one of the biggest hurdles to carving out a big data strategy, depending on how much data is actually involved. Big data spending is set to cross $25 billion in 2013 alone. Corporations in certain segments, especially those anticipating much larger data volumes, are keener on keeping data online (on private clouds) so that they comply with certain regulations. Enterprises are also looking to make the most of all the raw data they can get hold of from various sources to drive better analytics. But with hardware prices continuously falling, spending should also come down, depending on which platforms and services corporations select as part of their big data strategy.


Big Companies Will Make Big, Big Data Acquisitions

Corporations like IBM and Oracle will not spend money buying services from other companies; instead, they will look to tap the growing demand by acquiring smaller companies. Both Oracle and IBM will strive to become among the biggest big data service providers to companies worldwide.


Hadoop Alternatives May Rise

Apache Hadoop has gained significant popularity in the enterprise sector. Several banks, communications firms and even retailers use Hadoop as part of their core big data strategy. This year, however, things may change slightly. Corporations are slowly adopting a stack of different open technologies, such as private clouds, to manage data more easily as a combination of databases and data warehouse environments. Enterprises want solutions that fit right into their strategy without disrupting their existing IT landscape.

"Taming big data" IBM's best practices for the care of big data

Infographic “Taming big data,” provided by IBM.

Certain things cannot be overlooked when dealing with data. Best practices must be instituted for the care of big data, just as they have long been for small data. Before enjoying big data’s amazing analytical feats, you must first get it under control, with tools that are up to the challenge of implementing best practices in a big data world.

  • availability
  • management
  • disaster recovery
  • provisioning
  • optimization
  • backup & restore
  • security
  • governance
  • auditing
  • replication
  • virtualization
  • archiving


IBM PureData System

Big data is the core of your new enterprise application architecture. In the broader evolutionary picture, analytics and transactions will share a common big data infrastructure, encompassing storage, processing, memory, networking and other resources. More often than not, these workloads will run on distinct performance-optimized integrated systems, but will interoperate through a common architectural backbone.


Deploying a big-data infrastructure that does justice to both analytic and transactional applications can be challenging, especially when you lack platforms that are optimized to handle each type of workload. But the situation is improving. A key milestone in the evolution of big data toward agile support for analytics-optimized transactions arrives today, October 9, 2012, with the release of IBM PureData System, a new family of workload-specific, expert integrated hardware/software systems for both analytics and transactions. IBM has launched new workload-optimized systems for transactions (IBM PureData System for Transactions), data warehousing and advanced analytics (IBM PureData System for Analytics), and real-time business intelligence, online analytical processing and text analytics (IBM PureData System for Operational Analytics).

What are the common design principles that all of the PureData System platforms embody, and which they share with other PureSystems solutions? They all incorporate the following core features:

  • Patterns of expertise for built-in solution best practices: PureData System incorporates integrated expertise patterns, which encapsulate best practices drawn from the time-proven practical know-how of myriad data and analytics deployments. The systems are built on predefined, preconfigured, pre-optimized solution architectures. This enables them to support repeatable deployments of analytics and transactional computing with full lifecycle management, monitoring, security and so forth.
  • Scale-in, scale-out and scale-up capabilities: PureData System supports both the “scale-out” and “scale-up” approaches to capacity growth, also known as “horizontal” and “vertical” scaling, respectively. It also incorporates “scale-in” architectures, which let you add workloads and boost performance within existing densely configured nodes (see the sketch after this list). You can execute dynamic, unpredictable workloads with linear performance gains while making the most efficient use of existing server capacity, and you can significantly scale your big data storage, application software and compute resources per square foot of precious data-center space.
  • Cloud-ready deployment: PureData System provides workload-optimized hardware/software nodes that are building blocks for big-data clouds. As repeatable nodes, they support cloud architectures that scale on all three “Vs” of the big data universe (volume, velocity and variety) and may be deployed into any high-level cloud topology (centralized, hub-and-spoke, federated, etc.), either on your premises or in the data center of whatever cloud, hosting, or outsourcing vendor you choose.
  • Clean-slate designs for optimal performance: PureData System incorporates “clean-slate design” principles. These allow us to optimize and innovate in the internal design of each new integrated solution, improving performance, scalability, resiliency and so forth without being constrained by the artifacts of older platforms. When we think about the insides of our boxes, we’re always thinking outside the box.
  • Integrated management for maximum administrator productivity: PureData System incorporates unified management tooling and integrated expertise patterns to enable a low lifecycle cost of ownership and high administrator productivity. The tooling automates and facilitates the work of human administrators across a wide range of workload management, troubleshooting and administration tasks over the solutions’ useful lives.
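As a rough illustration of how these scaling approaches differ, here is a toy Python model; it is not an IBM sizing tool, and the class, node counts and capacity units are invented assumptions.

```python
# Toy model of the three capacity-growth approaches; not IBM sizing logic.
# Nodes, capacities and utilization figures are invented for illustration.

class Cluster:
    def __init__(self, nodes=1, capacity_per_node=100, utilization=0.5):
        self.nodes = nodes
        self.capacity_per_node = capacity_per_node  # arbitrary work units per node
        self.utilization = utilization              # fraction of capacity in use

    def scale_out(self, extra_nodes):
        """Horizontal scaling: add more nodes."""
        self.nodes += extra_nodes

    def scale_up(self, extra_capacity):
        """Vertical scaling: make each existing node bigger."""
        self.capacity_per_node += extra_capacity

    def scale_in(self, extra_workload):
        """Run more workload on the existing, densely configured nodes."""
        self.utilization = min(1.0, self.utilization + extra_workload)

    def headroom(self):
        return self.nodes * self.capacity_per_node * (1 - self.utilization)

cluster = Cluster(nodes=4)
print(cluster.headroom())  # 200.0 work units of spare capacity
cluster.scale_in(0.25)     # use the same footprint harder
cluster.scale_out(2)       # then add nodes when the racks fill up
print(cluster.headroom())  # 150.0 work units across 6 nodes
```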

Taken together, these principles enable the PureData platforms to realize fast business value, reduce total cost of ownership, and support maximum scalability and performance on a wide range of analytics and transactional workloads. These same principles are also the architectural backbone for the recently released IBM PureApplication Systems and IBM PureFlex Systems platforms.


Learn more about IBM PureData System

US Social Security was the mother of all Big Data projects

Great post from IBM celebrating Social Security’s 75th anniversary this year. It was exactly 75 years ago this month that IBM delivered to the U.S. government the machines credited with making the program possible.

During the Great Depression, President Franklin Delano Roosevelt conceived of Social Security as a program for senior citizens, the disabled, the unemployed, and widows and orphans who lacked financial protection. However, when Roosevelt signed the Social Security Act into law in August 1935, the document did not say how the details would play out.

How the agency would create and manage more than 26 million individual accounts had yet to be determined. The sheer scale of this early “Big Data” project was daunting enough; press reports labeled it the largest bookkeeping job of all time. In addition, the seemingly unrealistic timeframe (the law dictated that the program be in place by January 1, 1937) was equally frightening. Some experts felt the task was impossible and recommended that Roosevelt abandon it.

A 1937 Headline Announces the World’s “Biggest Bookkeeping Job”.


But the Social Security Administration stayed the course. In the summer of 1936, the agency collected proposals from various accounting equipment vendors, each suggesting their own approach to record-keeping.

IBM was ready to handle the challenge because it had a proven track record in large scale government accounting projects dating back to the 1920s. The company had the systems and process knowledge necessary to ensure that the Social Security program’s policies and procedures could be quickly developed and rapidly deployed. The depth of IBM’s proposal, as well as the government’s familiarity with IBM’s skills and equipment, convinced the Agency that the company had the most viable solution, and in September 1936, IBM was awarded the contract.

There was another factor. IBM’s CEO, Thomas Watson, Sr., continued to invest in research & development throughout the Depression. So when the Agency awarded IBM the contract and asked the company to invent a machine that would automatically and rapidly integrate payroll contributions into millions of individual accounts – something that was essential to the success of the program – IBM engineers were ready for the task. They developed the IBM 077 Collator, the machine that made Social Security a reality.

A Social Security Administration worker uses an IBM card punch to prepare cards for processing.

The invention of a new machine wasn’t the only challenge facing Social Security; the logistics of the program were equally daunting. The paper records alone took up 24,000 square feet of floor space. In fact, the weight of the paper records and IBM machines was so great that no building in Washington had floors sturdy enough to hold them, so operations were set up in an old Coca-Cola bottling plant on Baltimore’s waterfront.

The building was far from people friendly. It was cold in the winter, and hot in the summer. Plus, the summer heat brought with it the overpowering smells of rotting fish from the docks and spices from a local spice factory. The Social Security employees in the building also were plagued by sand fleas that lived in the sound-deadening sand barriers between floors.

When the IBM collators were put into action in June 1937, there was still much work to be done before the first Social Security check would be mailed to Miss Ida May Fuller in 1940. However, there were no longer doubts that the program was possible.

It was the close partnership between IBM and the Social Security Administration that created the record keeping system that made Roosevelt’s vision a reality. The partnership improved the quality of life for generations of Americans. It also catapulted IBM from a mid-sized company to the world’s leading information management provider.

But beyond the monumental size and scope of the project, the real significance of Social Security was that it proved that public-private partnerships could roll out enormous solutions to meet grand challenges, promote economic growth and help society.

Public-private partnerships aren’t easy. You need to balance different concerns and learn to work together. But when you do, these partnerships work, and they are essential for driving business and societal growth for the long term. From Social Security to IBM’s work with smarter cities around the world, public-private partnerships demonstrate that collaboration is the key to innovation.

Jonathan Fanton, Ph.D., is the Franklin Delano Roosevelt Visiting Fellow at the Roosevelt House Public Policy Institute at Hunter College in New York City. Dr. Fanton previously served as President of the John D. and Catherine T. MacArthur Foundation, and as President of the New School for Social Research.

IBM aims to transform Big Data into Business Opportunities

According to an article from Netezza, “IBM Debuts New Analytics Appliance to Help Retailers Transform Big Data Into Business Opportunities,” IBM has today announced a new analytics appliance that analyzes up to petabytes of big data, including consumer sales data and online shopping trends, to help retailers gain actionable insight into buying patterns.

The new appliance helps retailers deliver Smarter Commerce by using analytics to better understand buying patterns across multiple channels, and build stronger, more profitable customer relationships. Clients can now run complex, real-time analytics in a matter of seconds to improve the customer experience, shift marketing campaigns on the fly and boost sales.

For information about the IBM Netezza products, please visit: www.thinking.netezza.com or http://www-01.ibm.com/software/data/netezza/


IBM’s answer to the Oracle NoSQL database

Following Oracle’s recent push into the NoSQL market with the announcement of its Oracle NoSQL solution, IBM’s response was not long in coming: the company has unveiled its plans to roll out NoSQL technology inside the DB2 product line.


Curt Cotner, the company’s vice president and chief technology officer for database servers, spoke yesterday during a keynote address at IBM’s Information On Demand 2011 conference:

 “All of the DB2 and IBM Informix customers will have access to that and it will be part of your existing stack and you won’t have to pay extra for it,” Cotner said. “We’ll put that into our database products because we think that this is [something] that people want from their application programming experience, and it makes sense to put it natively inside of DB2.”
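To give a feel for the document-style programming experience Cotner is alluding to, here is a minimal, hypothetical sketch; it uses an in-memory Python dictionary as a stand-in document store and is not DB2’s actual NoSQL API.

```python
import json

# Hypothetical in-memory "document store"; this is NOT DB2's NoSQL API.
# It only illustrates the schema-flexible, document-style programming model.
documents = {}

def insert(doc_id, document):
    """Store a JSON document without declaring a schema up front."""
    documents[doc_id] = json.dumps(document)

def find(predicate):
    """Return every stored document matching an arbitrary predicate."""
    matches = []
    for raw in documents.values():
        doc = json.loads(raw)
        if predicate(doc):
            matches.append(doc)
    return matches

# Two documents with different shapes can live in the same collection.
insert("order-1", {"customer": "Acme", "total": 120.50, "items": ["disk", "cpu"]})
insert("order-2", {"customer": "Globex", "total": 89.99, "priority": "rush"})

rush_orders = find(lambda doc: doc.get("priority") == "rush")
print(rush_orders)  # -> [{'customer': 'Globex', 'total': 89.99, 'priority': 'rush'}]
```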

IBM’s plan to roll out NoSQL technology inside of DB2 made sense to conference attendee Gerard Ruppert, an IT consultant with John Daniel Associates in McKees Rocks, Pa.

“I think ultimately [IBM has] to go there because of the size of the data that’s moving around nowadays,” Ruppert said. “But it’s going to be a learning curve for a lot of the midmarket people because they just don’t have that expertise yet.”

The appeal of NoSQL lies in its ability to handle large volumes of data faster and more efficiently than traditional relational database management systems, according to Ruppert. He advised that before taking advantage of the new technology, organizations should make sure they have the right skills in-house. Those that don’t should consider bringing in some outside expertise before things get messed up, he added.

“In our own practice, we often go in and clean up after other people who don’t know what they’re doing,” he said.

NoSQL database management systems have a reputation for helping organizations analyze so-called big data stores. But “the jury is still out” on whether the technology is right for handling transactional systems, such as those used by banks and other institutions to process things like credit card orders, online purchases and stock trades.

“I think that if you asked our database guys, they would say that they’re generally not seeing deployments of technology like that for OLTP [online transaction processing] purposes,” said Ted Friedman, a data management analyst with Stamford, Conn.-based IT research firm Gartner Inc. “The vast majority of the usage is going in the analytics direction.”

Friedman added that IBM’s decision to offer NoSQL capabilities is in line with other industry giants who have made Hadoop, NoSQL and big data announcements of late. For example, Oracle yesterday announced the general availability of its new NoSQL database.

“It’s consistent with how we see the relational database model evolving over time,” he said. “IBM is doing it and others are as well. You saw Oracle at OpenWorld the other week making announcements around Hadoop and NoSQL capabilities and you see Microsoft doing some other things, so it’s a really big deal.”


First came the hardware, then the software, and now comes the age of data

Value was first in the hardware; in a second stage, it was in the software; and now, it seems, “The age of data is upon us,” declared RedMonk’s Stephen O’Grady at the Open Source Business Conference.

A great article summarizing O’Grady’s remarks is available here: http://www.ecommercetimes.com/story/72471.html

It summarizes the timeline as follows:

  1. The first stage, epitomized by IBM, held that the money was in the hardware and software was just an adjunct.
  2. Stage two, fired off by Microsoft, contended the money is in the software.
  3. Google epitomizes the third stage, where the money is not in the software, but software is a differentiator. “Google came up at a time when a lot of folks were building the Internet on the backs of some very expensive hardware and software. Google uses commodity hardware, free — meaning no-cost — software, and focuses on what it can do better than its competitors with that software.”

Wondering what the fourth stage could be? It might be Facebook and Twitter. “Now, software is not even differentiating; it’s the value of the data. Facebook and Twitter monetize their data in different ways.”