Rapid Insight With Results: Harnessing Analytics in the Cloud

By Basiruddin Syed
DB2 Social Marketing Manager

“It’s Now or Never” is a popular song recorded by Elvis Presley, and its lyrics seem to appeal to us now more than ever. Every day we make decisions in our lives, both personal and financial. We contemplate which holidays to book, whether this is the right time to invest, and so on. We are constantly making decisions, and our decisions are based on information, user reviews, and recommendations.

When individuals face such pressure to make quick decisions, imagine how much harder it must be for larger organizations.

Decision-making can be regarded as the cognitive process resulting in the selection of a belief or a course of action among several alternative possibilities.

Individuals face a lot of pressure to make rapid decisions and choose the best course of action within a short span of time; it is much more difficult for organizations, whose needs are far greater and whose time frames are even more compressed.

Today’s data-driven organization is faced with magnified urgency around data volume, user needs and compressed decision time frames. To address these challenges, organizations are exploring cloud-based BI and analytics technology to accelerate decision making and enhance business performance.

Find out more about how organizations are making this possible in “Rapid Insight With Results: Harnessing Analytics in the Cloud.”

 

Make Your Apps Highly Available and Scalable

By Vinayak Joshi
Senior Software Engineer, IBM

The IBM premium data-sharing technologies offer unmatched high availability and scalability to applications. If you are a JDBC application developer wanting to explore how these benefits accrue to your application and whether you need to do anything special to exploit them, my article – “Increase scalability and failure resilience of applications with IBM Data Server Driver for JDBC and SQLJ” – is a great source of information.

In the article, I explain how turning on a single switch on the IBM Data Server Driver for JDBC and SQLJ opens up all the workload balancing and high availability benefits to your JDBC applications. There is very little required for an application to unlock the workload balancing and high availability features built into the DB2 server and driver technologies.
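To make that single switch concrete, here is a minimal sketch of how it might be set through driver connection properties. The property name `enableSysplexWLB` is the driver's documented workload-balancing switch; the user name, password, and the helper class below are hypothetical placeholders, and connecting to an actual server would additionally require the db2jcc driver jar and a `jdbc:db2://host:port/dbname` URL.

```java
import java.util.Properties;

public class WlbConfig {
    // Builds the connection properties that switch on workload balancing
    // for the IBM Data Server Driver for JDBC and SQLJ. The credential
    // values are placeholders for illustration only.
    public static Properties wlbProperties(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        // The "single switch": enables transaction-level workload
        // balancing against pureScale / sysplex data-sharing members.
        props.setProperty("enableSysplexWLB", "true");
        return props;
    }

    public static void main(String[] args) {
        // These properties would be passed to DriverManager.getConnection
        // along with a jdbc:db2:// URL (not shown, since it requires a
        // live DB2 server and the driver jar on the classpath).
        Properties p = wlbProperties("dbuser", "secret");
        System.out.println(p.getProperty("enableSysplexWLB"));
    }
}
```

Everything else, as the article explains, happens under the covers once this property is on.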

For those curious about how the driver achieves this in tandem with pureScale and sysplex server technologies, the article should provide a good end-to-end view. While all the nuts-and-bolts explanations are provided, it is stressed that all of it happens under the covers; beyond a bare minimum understanding, application developers and DBAs need not concern themselves with it if they do not wish to.

The aspects a developer needs to keep in mind are highlighted, and recommendations on configuring and tuning applications are provided. We’ve made an effort to keep the article technically accurate while keeping the language simple enough for a non-technical audience to grasp.

Any and all feedback will be much appreciated and taken into account. Take a look at the article by clicking here, and feel free to share your thoughts in the comments section below.

I’m Your Big Data Problem: Integrate THIS!

By Rachel Bland
Senior Product Manager, IBM Business Analytics Growth Initiatives

Let’s take a look at my profile. Gen X-er. Young kid, busy job, work-life integrated. I will not answer your online surveys, I won’t even answer the phone unless I know who it is because if you really knew me you’d send a text. I don’t listen to voice mail – too disruptive.

I dare you to find me, understand me, and approach me. The only information you are going to get is spread haphazardly all over the web. My likes and dislikes are out there if you’re looking.

Where is it all?  Amazon has a slice, a big slice, of very relevant purchasing info, and so does Zappos. UPS and FedEx visit my house every day. Fitbit and MyFitnessPal are my most frequently updated apps. Yahoo has my personal email; Gmail is my dumping ground for spammers. Facebook is my personal life; LinkedIn and Twitter have my professional life. Comcast provides my phone and internet service but keeps calling me looking for someone else.

Sure, I’m difficult, but I’m still a pretty attractive, albeit elusive, consumer. I have a good income, a house that needs some work, a penchant for retail therapy, and a pretty decent appetite for convenience products and services. If you’ve got something I want, deliver for free, and take returns by mail, there’s a good chance I’ll give you a chance. If only you could read my mind and figure out what I’ll buy next!

This is a small example of the tremendous amounts of customer data available to businesses. How you get that data and what you do with it is what will separate the haves from the have-nots. To truly know your customers and your market segment, you have to do the work.

The reality is that the possibilities to tap into new markets, identify innovative efficiencies, and just run better and smoother are there; what’s also very real is the perception that the technical challenge is insurmountable. Well, not so much. We’ve learned a lot in the past few years, as the wealth of information from analysts and vendors demonstrates. The IBM Institute for Business Value has blueprints to help you identify the opportunities for value, and industry experts like Tony Curcio and Ray Wang from Constellation Research can provide you with best practices for the steps along the way.


Spend an hour with us on September 11th and hear what experts Ray Wang and Tony Curcio have to say:

Successful Big Data Projects require Big Data Integration, since most Hadoop initiatives involve collecting, moving, transforming, cleansing, integrating, exploring, and analyzing volumes of disparate data.

Register for this webinar to learn about:

  • The current state of the Big Data market
  • Customer success stories with Big Data and Big Data Integration
  • The 5 best practices when it comes to Big Data Integration

Influence Your Data Management Future Today

RadhaBy Radha Gowda
Product Marketing Manager, DB2 and related offerings

Too many tools! Too many repositories! Too many installs! Focus on enterprise deployment rather than database deployment. Do any of these sound familiar?

Yes, we heard you. While IBM offers an impressive portfolio of data management tools to manage the complete data life cycle, we agree that there are far too many tools. We want to help you streamline your data management processes and make you even more productive. We are happy to introduce the next-generation data management tool for DB2 for Linux, UNIX and Windows – IBM Data Server Manager (beta). It is simple to install, easy to use and enterprise-ready, with the ability to manage hundreds of databases.


IBM Data Server Manager integrates key capabilities from the existing portfolio of data management tools. It offers a simple, integrated web console to administer, monitor, manage, and optimize hundreds of DB2 for Linux, UNIX and Windows databases across the enterprise. It helps you identify, diagnose, solve, and prevent performance problems. It also provides expert guidance to optimize query performance, recommending which tables to convert to column-organized format or create shadow tables for in order to take advantage of BLU Acceleration, identifying storage-saving opportunities, and more. And it provides centralized client and server configuration management, so you can understand and control your environment more efficiently. Best of all, it is quick and easy to deploy.

IBM Data Server Manager beta software is available for Linux, AIX and Windows platforms. Early feedback on the tool has been very positive; we invite you to sign up for the beta program today and start influencing your data management future.

IBM Data Server Manager – Simple. Scalable. Smart.

Troubles Are Out of Reach With Instant Insights

RadhaBy Radha Gowda
Product Marketing Manager, DB2 and related offerings

Bet you have been hearing a lot about shadow tables in the DB2 “Cancun Release” lately. Umm… do shadow and Cancun remind you of “On the Beach” by Cliff Richard and The Shadows? Seriously, DB2 shadow tables can make you dance rock ’n’ roll on the beach, because you will be trouble free with real-time insights into your operations and, of course, lots of free time.

What is a shadow table?

Shadow tables have been around since the beginning of modern computing, primarily for improving performance. So what does the DB2 shadow table offer? The best of both the OLTP and OLAP worlds! You can now run your analytic reports directly in the OLTP environment with better performance.

Typically, organizations have separate OLTP and OLAP environments, either due to resource constraints or to ensure the best OLTP performance. The front-end OLTP workload is characterized by very small but high-volume transactions, and indexes are created to improve performance. In contrast, the back-end OLAP workload consists of long-running, complex transactions that are relatively small in number; indexes are created, but they may differ from the OLTP indexes. Of course, an ETL operation must transfer data from the OLTP database to the OLAP data mart or warehouse at intervals that may vary from minutes to days.

DB2 can help you simplify your infrastructure and operations with shadow tables. A shadow table is a column-organized copy of a row-organized table within the OLTP environment, and it may include all or a subset of the source table’s columns. Because the table is column organized, you get the enhanced performance that BLU Acceleration provides for analytic queries.

How do shadow tables work?


A shadow table is implemented as a materialized query table (MQT) that is maintained by replication. IBM InfoSphere Change Data Capture for DB2, available in the advanced editions, maintains shadow tables through automatic and incremental synchronization with their row-organized source tables.
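To make that concrete, here is a sketch of the DDL behind a shadow table. The table and column names are hypothetical placeholders; the clauses shown are the ones that mark the MQT as replication-maintained and column organized.

```sql
-- Sketch: a shadow table defined as a replication-maintained,
-- column-organized MQT over a row-organized OLTP table.
CREATE TABLE sales_shadow AS
  (SELECT order_id, order_date, region, amount FROM sales)
  DATA INITIALLY DEFERRED
  REFRESH DEFERRED
  ENABLE QUERY OPTIMIZATION
  MAINTAINED BY REPLICATION
  ORGANIZE BY COLUMN;
```

The replication technology then keeps sales_shadow incrementally synchronized with the row-organized sales table.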

While all applications access the row-organized table by default, the DB2 optimizer performs latency-based routing to determine whether a query should be routed to the shadow table or to the row-organized source.

A truly flexible and trouble-free OLTP world

Shadow tables offer the incredible speed you have come to expect from BLU Acceleration, while the source tables remain row-organized to best suit OLTP operations. In fact, with shadow tables, the performance of analytical queries can improve by 10x or more, with equal or greater transactional performance*.

With instant insight into “as it happens” data for all your questions, and all the free time you’ll have with no more indexing and tuning, what’s not to like? Try DB2 today.

* Based on internal IBM testing of sample transactional and analytic workloads by replacing 4 secondary analytical indexes in the transactional environment with BLU Shadow Tables. Performance improvement figures are cumulative of all queries in the workload. Individual results will vary depending on individual workloads, configurations and conditions.

Upgrading Your Business With Next-Generation Data Technologies

leskingBy Les King
Director, Big Data, Analytics and Database Solutions – Information Management, Software Group

Out in the field, I meet with businesses daily to help demonstrate how they can leverage big data solutions to increase their bottom line. As I talk with clients, one of the topics that continually comes up is what this will mean for their existing enterprise-class analytic infrastructure.

As existing analytic infrastructures evolve and modernize, there continues to be a fit-for-purpose design point. Leveraging new data solutions should be seen as an enhancement, extension, evolution or modernization of an existing analytic environment, not as a rip-and-replace approach.

Some enterprise architects, attracted to the scalability and cost effectiveness of Hadoop-based solutions, jumped the gun and thought this would be a panacea: a new, less expensive platform that would replace what is perceived to be more expensive analytic solutions currently in use. This has not turned out to be the case.

Here are the key application categories which I have seen:

  • New, high-data-volume solutions — these are usually best served by a Hadoop-based architecture. For example, a landing zone for data, from which you determine which subsets of the data you wish to work with further.
  • Existing, high-data-volume solutions — these are currently handled by traditional database solutions. There are cost-reduction opportunities here with a Hadoop-based architecture; an example would be a high-data-volume queryable archive.
  • Existing and new, critically performing solutions — these leverage traditional databases today and should stay where they are. Additional value can be achieved through technologies such as dynamic in-memory and columnar processing.
  • Existing and new, deep analytic and/or modelling solutions — these can be quite intensive and are often best suited to traditional databases, again leveraging the ever-improving technologies for BI solutions.

Don’t be afraid of the complexity of these categories. The reality is that many of these components are already in place today. By leveraging your existing infrastructure, you also simplify the process of ensuring that high availability, disaster recovery, security, data life cycle governance and resilience are all in place.

These are all key characteristics of an enterprise-class analytics infrastructure. From an application standpoint, technologies such as BigSQL provide a way for SQL application developers and SQL-generating BI tools to work with data regardless of where it sits in this modernized analytic ecosystem. A good example of this is BigSQL 3.0, which is included in IBM’s BigInsights.

Here is a recently published article by David Loshin, President of Knowledge Integrity Inc., who discusses this exact topic of evolving an enterprise-class analytics infrastructure:   http://public.dhe.ibm.com/common/ssi/ecm/en/iml14440usen/IML14440USEN.PDF

Coming Events: September Webinars, Webcasts and Conferences for DB2 Professionals

As the summer fades and the kids head back to school, it’s also time for you to learn something new! Here is a list of online webinars and live events throughout the month of September. Check your calendars, and join us to find out more:

Webcast: Is Your Database a Hero or a Hindrance? – September 4, 2014 – Noon EDT
NOW available ON DEMAND!
Presented by Kelly Schlamb – learn which databases work for you versus the ones that work against you. See how the right database architecture can help you achieve your SLAs and give application developers the freedom and flexibility to focus on their code, not the underlying infrastructure. Hosted by InformationWeek.

DB2 LUW V10.5 Tips, Upgrades, and Best Practices – September 5, 2014 – 10:00 AM EDT
Get the replay!
Presented by Melanie Stopher – this webinar will take you through a series of everyday DB2 LUW V10.5 tips, advice, upgrades, and best practices. A webinar you don’t want to miss! Hosted by the DB2Night Show.

Who’s Afraid of the Big (Data) Bad Wolf? Are you? – September 11, 2014 – 3:00 PM EDT
Presented by analyst R “Ray” Wang and IBM’s Tony Curcio – this webinar will help viewers gain a better understanding of client experiences with Big Data projects, and also talk about best practices for big data integration, all to help your organization tame the big (data) bad wolf. Hosted by InformationWeek.

IDUG Sydney – Sept. 10-12 – Sydney, Australia
Travel “down under” for 2 days of technical presentation sessions, educational seminars and a series of great guest speakers.

Webcast: Accelerating Your Analytics for Faster Insight – September 18, 2:00 PM EDT
Join us to learn about cutting-edge tools and techniques from the best and brightest minds in the industry. Don’t miss this opportunity to hear how other organizations competitively differentiate themselves by improving access, performance and productivity of their analytical systems. Speakers include Peter Hoopes, Mark Theissen, and Amit Patel. Hosted by Database Trends and Applications.

TDWI World Conference – September 21-26
TDWI World Conferences provide the leading forum for business and technology professionals looking to gain in-depth, vendor-neutral education on business intelligence and data warehousing. Designed specifically to maximize your learning experience and training investments, the information you gain and key contacts you make at these events enable you to immediately impact your current initiatives. TDWI World Conferences feature basic to advanced courses, peer networking, one-on-one consulting, certification, and more.

Webcast: How to Revolutionize Analytics with Next-Generation In-Memory Computing – September 25, 2014 – Noon EDT
Speakers: Brenda Boshoff, Amit Patel – register for this webinar to learn how you can gain a sustainable competitive advantage and take your organization to a new level with IBM’s next-generation in-memory computing. Hosted by InformationWeek.

 

 

pureScale at the Beach. – What’s New in the DB2 “Cancun Release”

By Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Today, I’m thinking about the beach. We’re heading into the last long weekend of the summer, the weather is supposed to be nice, and later today I’ll be going up to the lake with my family. But that’s not really why the beach is on my mind. Today, the DB2 “Cancun Release” was announced and made available, and as somebody that works extensively with DB2 and pureScale, it’s a pretty exciting day.

I can guarantee you that over the next little while, you’re going to be hearing a lot about the various new features and capabilities in the “Cancun Release” (also referred to as Cancun Release 10.5.0.4 or DB2 10.5 FP4). For instance, the new Shadow Tables feature — which exploits DB2 BLU Acceleration — allows for real-time analytics processing and reporting on your transactional database system. Game-changing stuff. However, I’m going to leave those discussions to others or for another time; today I’m going to focus on what’s new for pureScale.

As with any major new release, some things are flashy and exciting, while other things don’t have that same flash but make a real difference in the everyday life of a DBA. Examples of the latter in Cancun include the ability to perform online table reorgs and incremental backups (along with support for DB2 Merge Backup) in a pureScale environment, additional Optim Performance Manager (OPM) monitoring metrics and alerts around the use of HADR with pureScale, and being able to take GPFS snapshot backups. All of this leads to improved administration and availability.

There’s a large DB2 pureScale community out there and over the last few years we’ve received a lot of great feedback on the up and running experience. Based on this, various enhancements have been made to provide faster time to value, with the improved ease of use and serviceability of installation, configuration, and updates. This includes improved installation documentation, enhanced prerequisite checking, beefing up some of the more common error and warning messages, improved usability for online fix pack updates, and the ability to perform version upgrades of DB2 members and CFs in parallel.

In my opinion, the biggest news (and yes, the flashiest stuff) is the addition of new deployment options for pureScale. Previously, the implementation of a DB2 pureScale cluster required specialized network adapters — RDMA-capable InfiniBand or RoCE (RDMA over Converged Ethernet) adapter cards. RDMA stands for Remote Direct Memory Access and it allows for direct memory access from one computer into that of another without involving either one’s kernel, so there’s no interrupt handling and no context-switching that takes place as part of sending a message via RDMA (unlike with TCP/IP-based communication). This allows for very high-throughput, low-latency message passing, which DB2 pureScale uniquely exploits for very fast performance and scalability. Great upside, but a downside is the requirement on these adapters and an environment that supports them.

Starting in the DB2 Cancun Release, a regular, commodity TCP/IP-based interconnect can be used instead (often referred to as using “TCP/IP sockets”). What this gives you is an environment that has all of the high availability aspects of an RDMA-based pureScale cluster, but it isn’t necessarily going to perform or scale as well as an RDMA-based cluster will. However, this is going to be perfectly fine for many scenarios. Think about your daily drive to work. While you’d like to have a fast sports car for the drive in, it isn’t necessary for that particular need (maybe that’s a bad example — I’m still trying to convince my wife of that one). With pureScale, there are cases where availability is the predominant motivator for using it and there might not be a need to drive through massive amounts of transactions per second or scale up to tens of nodes. Your performance and scalability needs will dictate whether RDMA is required or not for your environment. By the way, you might see this feature referred to as pureScale “lite”. I’m still slowly warming up to that term, but the important thing is people know that “lite” doesn’t imply lower levels of availability.

With the ability to do this TCP/IP sockets-based communication between nodes, it also opens up more virtualization options. For example, DB2 pureScale can be implemented using TCP/IP sockets in both VMware (Linux) and KVM (Linux) on Intel, as well as in AIX LPARs on Power systems. These virtualized environments provide a lower cost of entry and are perfect for development, QA, production environments with moderate workloads, or just getting yourself some hands-on experience with pureScale.

It’s also worth pointing out that DB2 pureScale now supports and is optimized for IBM’s new POWER8 platform.

Having all of these new deployment options changes the economics of continuous availability, allowing broad infrastructure choices at every price point.

One thing that all of this should show you is the continued focus and investment in the DB2 pureScale technology by IBM research and development. With all of the press and fanfare around BLU, people often ask me if this is at the expense of IBM’s other technologies such as pureScale. You can see that this is definitely not the case. In fact, if you happen to be at Insight 2014 (formerly known as IOD) in Las Vegas in October, or at IDUG EMEA in Prague in November, I’ll be giving a presentation on everything new for pureScale in DB2 10.5, up to and including the “Cancun Release”. It’s an impressive amount of features that’s hard to squeeze into an hour. :-)

For more information on what’s new for pureScale and DB2 in general with this new release, check out the fix pack summary page in the DB2 Information Center.

Is Your Database a Hero or a Hindrance?

By Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Here’s a big question for you – Is your database a hero or a hindrance? In other words, is your database environment one that’s helping your organization meet your performance, scalability, and availability needs or is it holding you back from meeting your SLAs and keeping up with ever changing business needs?

Join me for an InformationWeek webinar on this topic next week — Thursday, September 4th at 12pm EDT — where I’ll be talking about these types of challenges faced by IT organizations and how DB2 has the capabilities to address them. News about some of these capabilities will be hot off the press, so you won’t want to miss it.

Click here to register


Steps toward the Future: How IBM DB2 is changing the Game

By Tori McClellan
Super Awesome Social Media Intern

 

Welcome to the New Age of database technology!

IBM DB2 with BLU Acceleration changes the game for in-memory computing. Due to the importance of in-memory computing, we created a dedicated website to take you through all the details, references, and more: www.ibmbluhub.com! The website is in place to help clients and prospects understand what next-gen in-memory computing can do for them and why IBM BLU is the ideal in-memory database to deliver fast answers.

A few examples of how IBM BLU has helped other clients find their ideal balance between speed and quality:

  1. Regulatory reporting is a huge challenge for all banks - Handelsbanken, one of the most profitable banks in the world, currently produces reports monthly but is expected to produce them daily in the near future. DB2 with BLU Acceleration has helped Handelsbanken analysts get the data they need for daily reports via the columnar store. Learn more by watching this video: http://bit.ly/1u7urAA
  2. Deploying DB2 with BLU Acceleration is simple - with only a handful of commands, you can turn on analytics mode, create a new database or auto-configure an existing one to make the best use of your hardware for analytics, and then load the data. Learn more from this IBM Redbook, which introduces the concepts of DB2 with BLU Acceleration from the ground up and describes the technologies that work hand in hand with BLU Acceleration: Architecting and Deploying IBM DB2 with BLU Acceleration in Your Analytical Environment.
  3. Get the facts and stay current by subscribing to the ibmbluhub.com newsletter.
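For illustration, that "handful of commands" might look something like the following command-line sketch. The database, table, and file names here are hypothetical placeholders; the key step is setting the DB2_WORKLOAD registry variable to ANALYTICS before creating the database, which auto-configures it for BLU and makes new tables column organized by default.

```sh
db2set DB2_WORKLOAD=ANALYTICS        # turn on analytics mode
db2 "CREATE DATABASE SALESDB"        # picks up BLU-friendly defaults
db2 "CONNECT TO SALESDB"
db2 "CREATE TABLE SALES (ORDER_ID INT, REGION VARCHAR(20), AMOUNT DECIMAL(10,2))"
db2 "LOAD FROM sales.del OF DEL REPLACE INTO SALES"
```

For an existing database, an AUTOCONFIGURE step would be run instead of CREATE DATABASE to retune it for analytics.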

- IBM DB2 with BLU Acceleration is a revolutionary technology and delivers breakthrough performance improvements for analytic queries by using dynamic in-memory columnar technologies.

- Unlike other vendor solutions, BLU Acceleration allows the unified computing of online transaction processing (OLTP) and analytics data inside a single database, thereby removing barriers and accelerating results for users. With an observed hundredfold improvement in query response time, BLU Acceleration provides a simple, fast, and easy-to-use solution for the needs of today’s organizations; quick access to business answers can be used to gain a competitive edge, lower costs, and more.

- Subscribe to the newsletter to continue learning about this hot in-memory database.  You will receive a periodic iNews email, which links to what’s new.  Just click and learn: http://www.ibmbluhub.com/blu-inews/


If this information suits your needs, be sure to follow @IBM_DB2 on Twitter. Get your information as it is being published.
