A free ebook will be available for you to download in the coming days on this page
Just in time for IDUG, Paul Zikopoulos and his team of co-authors have created a new ebook to help you deepen your skills regarding the latest release. Here are some details about the flashbook:
DB2 10.5 with BLU Acceleration – New Dynamic In-Memory Analytics for the Era of Big Data
Paul Zikopoulos, Matthew Huras, George Baklarz, Sam Lightstone, Aamer Sachedina
Technical editor: Roman B. Melnyk
- Speed of Thought Analytics with new BLU Acceleration
- Always Available Transactions with enhanced pureScale reliability
- Unprecedented Affordability with optimization for SAP workloads
- Future Proof Versatility with business-grade NoSQL and mobile database support for greater application flexibility
If big data is an untapped natural resource, how can you find the gold dust hidden within? Leaders realize that big data means all data, and are moving quickly to understand both structured and unstructured application data. However, analyzing this data without impacting the performance and reliability of essential business applications can prove costly and complex.
In the new era of big data, businesses require data systems that can blend always available transactions with speed of thought analytics. DB2 10.5 with new BLU Acceleration provides this speed, simplicity and cost efficiency while providing the ability to build next-generation applications with NoSQL features.
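To give a feel for why column-organized, in-memory processing speeds up analytics, here is a minimal Python sketch contrasting a row-organized scan with a column-organized one. This is purely illustrative: BLU Acceleration's actual implementation is far more sophisticated (actionable compression, SIMD parallelism, data skipping), and all names and data below are invented for the example.

```python
# Illustrative sketch: why column organization helps analytic scans.
# A row store must touch every column of every row; a column store
# reads only the column the query needs, and run-length encoding
# lets it aggregate compressed data without expanding each row.

from itertools import groupby

# Row-organized table: each row carries all columns.
rows = [
    {"region": "EMEA", "product": "A", "sales": 100},
    {"region": "EMEA", "product": "B", "sales": 150},
    {"region": "AMER", "product": "A", "sales": 200},
]

def total_sales_rowstore(rows):
    # Touches every full row just to read one column.
    return sum(r["sales"] for r in rows)

# Column-organized layout: one list per column.
columns = {
    "region": ["EMEA", "EMEA", "AMER"],
    "product": ["A", "B", "A"],
    "sales": [100, 150, 200],
}

def rle_encode(values):
    # Run-length encoding: (value, run_length) pairs.
    return [(v, len(list(g))) for v, g in groupby(values)]

def count_by_region_compressed(region_col):
    # Aggregate directly on the compressed form.
    counts = {}
    for value, run in rle_encode(region_col):
        counts[value] = counts.get(value, 0) + run
    return counts

print(total_sales_rowstore(rows))                     # 450
print(count_by_region_compressed(columns["region"]))  # {'EMEA': 2, 'AMER': 1}
```

The design point the sketch illustrates: an analytic query that needs one column out of many pays only for that column in a columnar layout, and repetitive columns compress well, so more of the working set fits in memory.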
With this book, you’ll learn about the power and flexibility of multi-workload, multi-platform database software. Use the knowledge in this book to get started with the latest DB2 release by downloading the trial version. Visit ibm.
“In contrast to some competitors, the company believes Big Data isn’t some new issue requiring emerging or arcane technologies,” said analyst Charles King on IBM’s BLU Acceleration technologies. “Instead, IBM views Big Data as a fundamental challenge that stretches across the IT landscape, tangibly affecting the technology market as a whole.”
Three new products have just been rolled out by IBM, including technologies that promise 25 times faster reporting and analytics.
About “BLU Acceleration”:
IBM, at its annual investor briefing, revealed that it is raising its 2015 revenue target for big data and analytics from $16 billion to $20 billion. IBM’s strategic growth initiatives include business analytics, Smarter Planet, cloud computing, and emerging growth markets, which are positioned as key drivers of its 2015 goals.
Big Money For Big Data
Cost is a very important factor for many corporations, and depending on how much data is actually involved, it is one of the biggest hurdles to carving out a big data strategy. Big data spending is set to cross $25 billion in 2013 alone. Corporations in certain segments, especially those anticipating much larger data volumes, prefer to keep data online (on private clouds) so that they comply with certain regulations. Enterprises are also looking to make the most of all the raw data they can gather from various sources to improve their analytics. But with hardware prices continuously falling, spending should also come down, depending on which platforms and services corporations select as part of their big data strategy.
Big Companies Will Make Big, Big Data Acquisitions
Corporations like IBM and Oracle will not simply buy services from other companies; instead, they will tap the growing demand by acquiring smaller companies. Both Oracle and IBM will strive to become among the biggest big data service providers to companies worldwide.
Hadoop Alternatives May Rise
Apache’s Hadoop has gained significant popularity in the enterprise sector. Several banking institutions, communications firms and even retailers use Hadoop as part of their core big data strategy. However, this year things may change slightly for Apache. Corporations are now slowly adopting a stack of different open technologies, such as private clouds, to manage data more easily as a combination of databases and data warehouse environments. Enterprises are keen to slot new solutions into their strategy without disrupting their existing IT deployments.
Infographic “Taming big data” provided by IBM.
Certain things cannot be overlooked when dealing with data. Best practices must be instituted for the care of big data just as they have long been in small data. Before enjoying big data’s amazing analytical feats, you must first get it under control – with tools that are up to the challenge of implementing best practices in a big data world.
Big data is the core of your new enterprise application architecture. In the broader evolutionary picture, analytics and transactions will share a common big data infrastructure, encompassing storage, processing, memory, networking and other resources. More often than not, these workloads will run on distinct performance-optimized integrated systems, but will interoperate through a common architectural backbone.
Deploying a big-data infrastructure that does justice to both analytic and transactional applications can be challenging, especially when you lack platforms that are optimized to handle each type of workload. But the situation is improving. A key milestone in the evolution of big data toward agile support for analytics-optimized transactions arrives today, October 9, 2012, with the release of IBM PureData System, a new family of workload-specific, hardware/software expert integrated systems for both analytics and transactions. IBM has launched new workload-optimized systems for transactions (IBM PureData System for Transactions), data warehousing and advanced analytics (IBM PureData System for Analytics), and real-time business intelligence, online analytical processing and text analytics (IBM PureData System for Operational Analytics).
What are the common design principles that all of the PureData System platforms embody, and which they share with other PureSystems solutions? They all incorporate the same set of core features.
Taken together, these principles enable the PureData platforms to realize fast business value, reduce total cost of ownership, and support maximum scalability and performance on a wide range of analytics and transactional workloads. These same principles are also the architectural backbone for the recently released IBM PureApplication Systems and IBM PureFlex Systems platforms.
Great post from IBM, celebrating Social Security’s 75th anniversary this year. It was exactly 75 years ago this month that IBM delivered to the U.S. government the machines credited with making the program possible.
During the Great Depression, President Franklin Delano Roosevelt conceived of Social Security as a program for senior citizens, the disabled, the unemployed, widows and orphans who lacked financial protection. However, when Roosevelt signed the Social Security Act into law in August 1935, the document did not say how the details would play out.
How to create and manage more than 26 million individual accounts had yet to be determined. The sheer scale of this early “Big Data” project was daunting enough; press reports labeled it the largest bookkeeping job of all time. In addition, the seemingly unrealistic timeframes – the law dictated that the program be in place by January 1, 1937 – were equally frightening. Some experts felt the task was impossible, and recommended that Roosevelt abandon it.
A 1937 Headline Announces the World’s “Biggest Bookkeeping Job”.
But the Social Security Administration stayed the course. In the summer of 1936, the agency collected proposals from various accounting equipment vendors, each suggesting their own approach to record-keeping.
IBM was ready to handle the challenge because it had a proven track record in large scale government accounting projects dating back to the 1920s. The company had the systems and process knowledge necessary to ensure that the Social Security program’s policies and procedures could be quickly developed and rapidly deployed. The depth of IBM’s proposal, as well as the government’s familiarity with IBM’s skills and equipment, convinced the Agency that the company had the most viable solution, and in September 1936, IBM was awarded the contract.
There was another factor. IBM’s CEO, Thomas Watson, Sr., continued to invest in research & development throughout the Depression. So when the Agency awarded IBM the contract and asked the company to invent a machine that would automatically and rapidly integrate payroll contributions into millions of individual accounts – something that was essential to the success of the program – IBM engineers were ready for the task. They developed the IBM 077 Collator, the machine that made Social Security a reality.
A Social Security Administration worker uses an IBM card punch to prepare cards for processing.
The invention of a new machine wasn’t the only challenge facing Social Security; the logistics of the program were equally daunting. The paper records alone took up 24,000 square feet of floor space. In fact, the weight of the paper records and IBM machines was so great that no building in Washington had floors sturdy enough to hold them, so operations were set up in an old Coca-Cola bottling plant on Baltimore’s waterfront.
The building was far from people friendly. It was cold in the winter, and hot in the summer. Plus, the summer heat brought with it the overpowering smells of rotting fish from the docks and spices from a local spice factory. The Social Security employees in the building also were plagued by sand fleas that lived in the sound-deadening sand barriers between floors.
When the IBM collators were put into action in June 1937, there was still much work to be done before the first Social Security check would be mailed to Miss Ida May Fuller in 1940. However, there were no longer doubts that the program was possible.
It was the close partnership between IBM and the Social Security Administration that created the record keeping system that made Roosevelt’s vision a reality. The partnership improved the quality of life for generations of Americans. It also catapulted IBM from a mid-sized company to the world’s leading information management provider.
But beyond the monumental size and scope of the project, the real significance of Social Security was that it proved that public-private partnerships could roll out enormous solutions to meet grand challenges, promote economic growth and help society.
Public-private partnerships aren’t easy. You need to balance different concerns and learn to work together. But when you do, these partnerships work, and they are essential for driving business and societal growth for the long term. From Social Security to IBM’s work with smarter cities around the world, public-private partnerships demonstrate that collaboration is the key to innovation.
Jonathan Fanton, Ph.D., is the Franklin Delano Roosevelt Visiting Fellow at the Roosevelt House Public Policy Institute at Hunter College in New York City. Dr. Fanton previously served as President of the John D. and Catherine T. MacArthur Foundation, and as President of the New School for Social Research.
According to an article from Netezza, “IBM Debuts New Analytics Appliance to Help Retailers Transform Big Data Into Business Opportunities,” IBM has today announced a new analytics appliance that analyzes up to petabytes of big data, including consumer sales data and online shopping trends, to help retailers gain actionable insight into buying patterns.
The new appliance helps retailers deliver Smarter Commerce by using analytics to better understand buying patterns across multiple channels, and build stronger, more profitable customer relationships. Clients can now run complex, real-time analytics in a matter of seconds to improve the customer experience, shift marketing campaigns on the fly and boost sales.
Following Oracle’s recent push into the NoSQL market with the announcement of its Oracle NoSQL solution, IBM’s response was not long in coming: it unveiled plans to roll out NoSQL technology inside the DB2 product line.
According to Curt Cotner, the company’s vice president and chief technology officer for database servers, who spoke yesterday during a keynote address at IBM’s Information On Demand 2011 conference: “All of the DB2 and IBM Informix customers will have access to that and it will be part of your existing stack and you won’t have to pay extra for it,” Cotner said. “We’ll put that into our database products because we think that this is [something] that people want from their application programming experience, and it makes sense to put it natively inside of DB2.”
IBM’s plan to roll out NoSQL technology inside of DB2 made sense to conference attendee Gerard Ruppert, an IT consultant with John Daniel Associates in McKees Rocks, Pa. “I think ultimately [IBM has] to go there because of the size of the data that’s moving around nowadays,” Ruppert said. “But it’s going to be a learning curve for a lot of the midmarket people because they just don’t have that expertise yet.”
The appeal of NoSQL lies in its ability to handle large volumes of data faster and more efficiently than traditional relational database management systems, according to Ruppert. He advised that before taking advantage of the new technology, organizations should make sure they have the right skills in-house. Those that don’t should consider bringing in some outside expertise before things get messed up, he added. “In our own practice, we often go in and clean up after other people who don’t know what they’re doing,” he said.
NoSQL database management systems have a reputation for helping organizations analyze so-called big data stores. But “the jury is still out” on whether the technology is right for handling transactional systems, such as those used by banks and other institutions to process things like credit card orders, online purchases and stock trades. “I think that if you asked our database guys, they would say that they’re generally not seeing deployments of technology like that for OLTP [online transaction processing] purposes,” said Ted Friedman, a data management analyst with Stamford, Conn.-based IT research firm Gartner Inc. “The vast majority of the usage is going in the analytics direction.”
Friedman added that IBM’s decision to offer NoSQL capabilities is in line with other industry giants that have made Hadoop, NoSQL and big data announcements of late. For example, Oracle yesterday announced the general availability of its new NoSQL database. “It’s consistent with how we see the relational database model evolving over time,” he said. “IBM is doing it and others are as well. You saw Oracle at OpenWorld the other week making announcements around Hadoop and NoSQL capabilities and you see Microsoft doing some other things, so it’s a really big deal.”
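To make concrete what the “NoSQL” document model in these discussions refers to, here is a minimal, purely illustrative Python sketch of a schemaless document store. It is not DB2’s actual NoSQL API; the class, method names and sample documents are all invented for illustration of why flexible schemas suit varied big data sources.

```python
# Illustrative sketch of the schemaless "document" model: records in
# one collection need not share a schema, unlike rows in a fixed
# relational table.

import json

class DocumentStore:
    """A toy in-memory document store (illustrative only)."""

    def __init__(self):
        self._docs = {}
        self._next_id = 1

    def insert(self, doc):
        doc_id = self._next_id
        self._next_id += 1
        # Store each document as JSON text, as document databases do.
        self._docs[doc_id] = json.dumps(doc)
        return doc_id

    def find(self, predicate):
        # Full scan with a caller-supplied predicate; real stores
        # index document fields to avoid scanning everything.
        return [doc for doc in map(json.loads, self._docs.values())
                if predicate(doc)]

store = DocumentStore()
# Documents with different shapes coexist in the same collection.
store.insert({"type": "order", "amount": 250, "channel": "web"})
store.insert({"type": "order", "amount": 90})
store.insert({"type": "clickstream", "page": "/checkout"})

web_orders = store.find(lambda d: d.get("channel") == "web")
print(len(web_orders))  # 1
```

The trade-off the analysts above are weighing follows from this shape: flexible documents are convenient for ingesting heterogeneous analytic data, but the lack of a fixed schema and of relational transaction guarantees is why its fit for OLTP remains debated.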
Value was found first in hardware; in a second stage it moved into software; and now, it seems, “The age of data is upon us,” declared Redmonk’s Stephen O’Grady at the Open Source Business Conference.
A great article available here summarizes O’Grady’s words: http://www.ecommercetimes.com/story/72471.html
Mainly, it summarizes the timeline as follows:
Wondering what the fourth stage could be? It might be Facebook and Twitter. “Now, software is not even differentiating; it’s the value of the data. Facebook and Twitter monetize their data in different ways.”