Coming Events: September Webinars, Webcasts and Conferences for DB2 Professionals

As the summer fades and the kids head back to school, it’s also time for you to learn something new! Here is a list of online webinars and live events throughout the month of September. Check your calendars, and join us to find out more:

Webcast: Is Your Database a Hero or a Hindrance? – September 4, 2014 – Noon EDT
Now available on demand!
Presented by Kelly Schlamb – learn which databases work for you versus the ones that work against you. See how the right database architecture can help you achieve your SLAs and give application developers the freedom and flexibility to focus on their code, not the underlying infrastructure. Hosted by InformationWeek.

DB2 LUW V10.5 Tips, Upgrades, and Best Practices – September 5, 2014 – 10:00 AM EDT
Get the replay!
Presented by Melanie Stopfer – this webinar will take you through a series of everyday DB2 LUW V10.5 tips, advice, upgrades, and best practices. A webinar you don’t want to miss! Hosted by the DB2Night Show.

Who’s Afraid of the Big (Data) Bad Wolf? Are You? – September 11, 2014 – 3:00 PM EDT
Presented by analyst R “Ray” Wang and IBM’s Tony Curcio – this webinar will help viewers gain a better understanding of client experiences with Big Data projects and will also cover best practices for big data integration, all to help your organization tame the big (data) bad wolf. Hosted by InformationWeek.

IDUG Sydney – September 10-12, 2014 – Sydney, Australia
Travel “down under” for technical presentation sessions, educational seminars, and a series of great guest speakers.

Webcast: Accelerating Your Analytics for Faster Insight – September 18, 2014 – 2:00 PM EDT
Join us to learn about cutting-edge tools and techniques from the best and brightest minds in the industry. Don’t miss this opportunity to hear how other organizations competitively differentiate themselves by improving the access, performance, and productivity of their analytical systems. Speakers include Peter Hoopes, Mark Theissen, and Amit Patel. Hosted by Database Trends and Applications.

TDWI World Conference – September 21-26, 2014
TDWI World Conferences provide the leading forum for business and technology professionals looking to gain in-depth, vendor-neutral education on business intelligence and data warehousing. Designed specifically to maximize your learning experience and training investments, the information you gain and the key contacts you make at these events enable you to immediately impact your current initiatives. TDWI World Conferences feature basic to advanced courses, peer networking, one-on-one consulting, certification, and more.

Webcast: How to Revolutionize Analytics with Next-Generation In-Memory Computing – September 25, 2014 – Noon EDT
Speakers: Brenda Boshoff and Amit Patel – register for this webinar to learn how you can gain a sustainable competitive advantage and take your organization to a new level with IBM’s next-generation in-memory computing. Hosted by InformationWeek.

pureScale at the Beach – What’s New in the DB2 “Cancun Release”

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Today, I’m thinking about the beach. We’re heading into the last long weekend of the summer, the weather is supposed to be nice, and later today I’ll be going up to the lake with my family. But that’s not really why the beach is on my mind. Today, the DB2 “Cancun Release” was announced and made available, and as somebody who works extensively with DB2 and pureScale, it’s a pretty exciting day.

I can guarantee that over the next little while, you’re going to be hearing a lot about the various new features and capabilities in the “Cancun Release” (also referred to as Cancun Release 10.5.0.4 or DB2 10.5 FP4). For instance, the new Shadow Tables feature — which exploits DB2 BLU Acceleration — allows for real-time analytics processing and reporting on your transactional database system. Game-changing stuff. However, I’m going to leave those discussions to others or for another time, and today I’m going to focus on what’s new for pureScale.

As with any major new release, some things are flashy and exciting, while other things don’t have that same flash but make a real difference in the everyday life of a DBA. Examples of the latter in Cancun include the ability to perform online table reorgs and incremental backups (along with support for DB2 Merge Backup) in a pureScale environment, additional Optim Performance Manager (OPM) monitoring metrics and alerts around the use of HADR with pureScale, and being able to take GPFS snapshot backups. All of this leads to improved administration and availability.
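To give a flavour of what this means in practice, here’s a rough sketch of the kinds of commands a DBA can now run against a pureScale database in the Cancun Release (the database and table names here are made up for illustration; check the DB2 10.5 documentation for the exact options supported in your environment):

```shell
# Online (inplace) table reorg, now supported in pureScale --
# the table remains available for reads and writes while it runs.
db2 connect to SALESDB
db2 "REORG TABLE app.orders INPLACE ALLOW WRITE ACCESS"

# Incremental backups require modification tracking to be enabled first.
db2 update db cfg for SALESDB using TRACKMOD ON

# After the next full backup, incremental backups become possible:
db2 backup db SALESDB online
db2 backup db SALESDB online incremental
```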

There’s a large DB2 pureScale community out there and over the last few years we’ve received a lot of great feedback on the up and running experience. Based on this, various enhancements have been made to provide faster time to value, with the improved ease of use and serviceability of installation, configuration, and updates. This includes improved installation documentation, enhanced prerequisite checking, beefing up some of the more common error and warning messages, improved usability for online fix pack updates, and the ability to perform version upgrades of DB2 members and CFs in parallel.

In my opinion, the biggest news (and yes, the flashiest stuff) is the addition of new deployment options for pureScale. Previously, the implementation of a DB2 pureScale cluster required specialized network adapters — RDMA-capable InfiniBand or RoCE (RDMA over Converged Ethernet) adapter cards. RDMA stands for Remote Direct Memory Access, and it allows for direct memory access from one computer into that of another without involving either one’s kernel, so there’s no interrupt handling and no context switching as part of sending a message via RDMA (unlike with TCP/IP-based communication). This allows for very high-throughput, low-latency message passing, which DB2 pureScale uniquely exploits for very fast performance and scalability. Great upside, but a downside is the requirement for these adapters and an environment that supports them.

Starting in the DB2 Cancun Release, a regular, commodity TCP/IP-based interconnect can be used instead (often referred to as using “TCP/IP sockets”). What this gives you is an environment that has all of the high availability aspects of an RDMA-based pureScale cluster, but it isn’t necessarily going to perform or scale as well as an RDMA-based cluster will. However, this is going to be perfectly fine for many scenarios. Think about your daily drive to work. While you’d like to have a fast sports car for the drive in, it isn’t necessary for that particular need (maybe that’s a bad example — I’m still trying to convince my wife of that one). With pureScale, there are cases where availability is the predominant motivator for using it and there might not be a need to drive through massive amounts of transactions per second or scale up to tens of nodes. Your performance and scalability needs will dictate whether RDMA is required or not for your environment. By the way, you might see this feature referred to as pureScale “lite”. I’m still slowly warming up to that term, but the important thing is people know that “lite” doesn’t imply lower levels of availability.

With the ability to do this TCP/IP sockets-based communication between nodes, it also opens up more virtualization options. For example, DB2 pureScale can be implemented using TCP/IP sockets in both VMware (Linux) and KVM (Linux) on Intel, as well as in AIX LPARs on Power boxes. These virtualized environments provide a lower cost of entry and are perfect for development, QA, production environments with moderate workloads, or just getting yourself some hands-on experience with pureScale.

It’s also worth pointing out that DB2 pureScale now supports and is optimized for IBM’s new POWER8 platform.

Having all of these new deployment options changes the economics of continuous availability, allowing broad infrastructure choices at every price point.

One thing that all of this should show you is the continued focus and investment in the DB2 pureScale technology by IBM research and development. With all of the press and fanfare around BLU, people often ask me if this is at the expense of IBM’s other technologies such as pureScale. You can see that this is definitely not the case. In fact, if you happen to be at Insight 2014 (formerly known as IOD) in Las Vegas in October, or at IDUG EMEA in Prague in November, I’ll be giving a presentation on everything new for pureScale in DB2 10.5, up to and including the “Cancun Release”. It’s an impressive amount of features that’s hard to squeeze into an hour. :-)

For more information on what’s new for pureScale and DB2 in general with this new release, check out the fix pack summary page in the DB2 Information Center.

Is Your Database a Hero or a Hindrance?

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Here’s a big question for you – Is your database a hero or a hindrance? In other words, is your database environment one that’s helping your organization meet your performance, scalability, and availability needs or is it holding you back from meeting your SLAs and keeping up with ever changing business needs?

Join me for an InformationWeek webinar on this topic next week — Thursday, September 4th at 12pm EDT — where I’ll be talking about these types of challenges faced by IT organizations and how DB2 has the capabilities to address them. News about some of these capabilities will be hot off the press, so you won’t want to miss it.

Click here to register


Achieving High Availability with PureData System for Transactions

Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

A short time ago, I wrote about improving IT productivity with IBM PureData System for Transactions and I mentioned a couple of new white papers and solution briefs on that topic.  Today, I’d like to highlight another one of these new papers: Achieving high availability with PureData System for Transactions.

I’ve recently been meeting with a lot of different companies and organizations to talk about DB2 pureScale and PureData System for Transactions, and while there’s a lot of interest and discussion around performance and scalability, the primary reason that I’m usually there is to talk about high availability and how they can achieve higher levels than what they’re seeing today. One thing I’m finding is that there are a lot of different interpretations of what high availability means (and I’m not going to argue here over what the correct definition is). To some, it’s simply a matter of what happens when some sort of localized unplanned outage occurs, like a failure of their production server or a component of that server. How can downtime be minimized in that case? Others extend this discussion to include planned outages, such as maintenance operations or adding more capacity into the system. And others will include disaster recovery under the high availability umbrella as well (while many keep them as distinctly separate topics — but that’s just semantics). It’s not enough that they’re protected in the event of some sort of hardware component failure on their production system; what would happen if the entire data center were to experience an outage?

Finally (and I don’t mean to imply that this is an exhaustive list — when it comes to keeping the business available and running, other things may come into the equation as well), availability could also include a discussion of performance. There is typically an expectation of performance and response time associated with transactions, especially those being executed on behalf of customers, users, and business processes. If a customer clicks a button on a website and it doesn’t come back quickly, it may not be distinguishable from an outage, and the customer may leave that site, choosing to go to a competitor instead.

It should be pointed out that not every database requires the highest levels of availability. It might not be a big deal to an organization if a particular departmental database is offline for 20 minutes, or an hour, or even the entire day. But there are certainly some business-critical databases, considered “tier 1”, that do require the highest availability possible. Therefore, it is important to understand the availability requirements that your organization has. But I’m likely already preaching to the choir here, and you’re reading this because you do have a need and you understand the ramifications to your business if these needs aren’t met. With respect to the companies I’ve been meeting with, just hearing about what kinds of systems they depend on from both an internal and external perspective — and what it means to them if there’s an interruption in service — has been fascinating. Of course, I’m sympathetic to their plight, but as a consumer and a user I still have very high expectations around service. I get pretty mad when I can’t make an online trade, check the status of my travel reward accounts, or even order a pizza online, especially when I know what those companies could be doing to provide better availability to their users. :-)

Those things I mentioned above — high availability, disaster recovery, and performance (through autonomics) — are all discussed as part of the paper in the context of PureData System for Transactions. PureData System for Transactions is a reliable and resilient expert integrated system designed for high availability, high throughput online transaction processing (OLTP). It has built-in redundancies to continue operating in the event of a component failure, disaster recovery capabilities to handle complete system unavailability, and autonomic features to dynamically manage utilization and performance of the system. Redundancies include power, compute nodes, storage, and networking (including the switches and adapters). In the case of a component failure, a redundant component keeps the system available. And if there is some sort of data center outage (planned or unplanned), a standby system at another site can take over for the downed system. This can be accomplished via DB2’s HADR feature (remember that DB2 pureScale is the database environment within the system) or through replication technology such as Q Replication or Change Data Capture (CDC), part of IBM InfoSphere Data Replication (IIDR).

Just a reminder that the IDUG North America 2014 conference will be taking place in Phoenix next month from May 12-16. Being in a city that just got snowed on this morning, I’m very much looking forward to some hot weather for a change. Various DB2, pureScale, and PureData topics are on the agenda. And since I’m not above giving myself a shameless plug, come by and see me at my session: A DB2 DBA’s Guide to pureScale (session G05). Click here for more details on the conference. Also, check out Melanie Stopfer’s article on IDUG.  Hope to see you there!

Improve IT Productivity with IBM PureData System for Transactions

Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

I’m a command line kind of guy, always have been. When I’m loading a presentation or a spreadsheet on my laptop, I don’t open the application or the file explorer and work my way through it to find the file in question and double click the icon to open it. Instead, I open a command line window (one of the few icons on my desktop), navigate to the directory I know where the file is (or will do a command line file search to find it) and I’ll execute/open the file directly from there. When up in front of a crowd, I can see the occasional look of wonder at that, and while I’d like to think it’s them thinking “wow, he’s really going deep there… very impressive skills”, in reality it’s probably more like “what is this caveman thinking… doesn’t he know there are easier, more intuitive ways of accomplishing that?!?”

The same goes for managing and monitoring the systems I’ve been responsible for in the past. Where possible, I’ve used command line interfaces, I’ve written scripts, and I’ve visually pored through raw data to investigate problems. But inevitably I’d end up doing something wrong, like missing a step, doing something out of order, or missing some important output – leaving things not working or not performing as expected. Over the years, I’ve considered that part of the fun and challenge of the job. How do I fix this problem? But nowadays, I don’t find it so fun. In fact, I find it extremely frustrating. Things have gotten more complex and there are more demands on my time. I have much more important things to do than figure out why the latest piece of software isn’t interacting with the hardware or other software on my system the way it’s supposed to. When I try to do things on my own now, any problem is immediately met with an “argh!” followed by a Google search, hoping to find others who are trying to do what I’m doing and have a solution for it.

When I look at enterprise-class systems today, there’s just no way that some of the old techniques of implementation, configuration, tuning, and maintenance are going to be effective. Systems are getting larger and more complex. Can anybody tell me that they enjoy installing fix packs from a command line or ensuring that all of the software levels are at exactly the right level before proceeding with an installation of some modern piece of software (or multiple pieces that all need to work together, which is fairly typical today)? Or feel extremely confident in getting it all right? And you’ve all heard about the demands placed on IT today by “Big Data”. Most DBAs, system administrators, and other IT staff are just struggling to keep the current systems functioning, not able to give much thought to implementing new projects to handle the onslaught of all this new information. The thought of bringing a new application and database up, especially one that requires high availability and/or scalability, is pretty daunting. As is the work to grow out such a system when more demands are placed on it.

It’s for these reasons and others that IBM introduced PureSystems. Specifically, I’d like to talk here about IBM PureData System for Transactions. It’s an Expert Integrated System that is designed to ensure that the database environment is highly available, scalable, and flexible to meet today’s and tomorrow’s online transaction processing demands. These systems are a complete package and they include the hardware, storage, networking, operating system, database management software, cluster management software, and the tools. It is all pre-integrated, pre-configured, and pre-tested. If you’ve ever tried to manually stand up a new system, including all of the networking stuff that goes into a clustered database environment, you’ll greatly appreciate the simplicity that this brings.

The system is also optimized for transaction processing workloads, having been built to capture and automate what experts do when deploying, managing, monitoring, and maintaining these types of systems. System administration and maintenance is all done through an integrated systems console, which simplifies a lot of the operational work that system administrators and database administrators need to do on a day-to-day basis. What? Didn’t I just say above that I don’t like GUIs? No, I didn’t quite say that. Yeah, I still like those opportunities for hands-on, low-level interactions with a system, but it’s hard not to appreciate something that is going to streamline everything I need to do to manage a system and at the same time keep my “argh” moments down to a minimum. The fact that I can deploy a DB2 pureScale cluster within the system in about an hour and deploy a database in minutes (which, by the way, also automatically sets it up for performance monitoring) with just a few clicks is enough to make me love my mouse.

IBM has recently released some white papers and solution briefs around this system and a couple of them talk to these same points that I mentioned above. To see how the system can improve your productivity and efficiency, allowing your organization to focus on the more important matters at hand, I suggest you give them a read:

  • Improve IT productivity with IBM PureData System for Transactions (solution brief)
  • Four strategies to improve IT staff productivity (white paper)

The four strategies described in these papers, which speak to the capabilities of PureData System for Transactions, are:

  • Simplify and accelerate deployment of high availability clusters and databases
  • Streamline systems management
  • Reduce maintenance time and risk
  • Scale capacity without incurring downtime

I suspect that I won’t be changing my command line and management/maintenance habits on my laptop and PCs any time soon, but when it comes to this system, I’m very happy to come out of my cave.

New to IBM PureData System for Transactions: DB2 10.5 and HADR

Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

In a comment on my previous blog post, somebody recently asked when PureData System for Transactions was going to be updated to include DB2 10.5, the latest and greatest version of DB2, released on June 14th, 2013. At the time, I hinted that it would be coming soon, but I couldn’t share any details. The curtain can now be lifted.

PureData System for Transactions Fix Pack 3 was made available for download on July 31st (and any new deployments of the system will automatically have this level as well). This fix pack adds DB2 10.5 to the software stack of the system. So, when you go to deploy a cluster, you can now choose either DB2 10.1 or 10.5, depending on your needs.

As with every major release of DB2, this new version is jam-packed with countless features and enhancements. There’s a lot of great information out there about 10.5 if you’re interested in reading more, including some entries from fellow blogger Bill Cole, and the What’s New section in the Information Center.

While many things will be of general interest to people – such as performance enhancements, new SQL capabilities, and further Oracle compatibility updates – I did want to specifically call out something that will be of great interest to those interested in the DB2 pureScale feature and PureData System for Transactions (which has pureScale built into it). This is the addition of HADR support. HADR is DB2’s High Availability Disaster Recovery feature. With HADR, changes to a primary database are replicated via transaction log shipping to one or more standbys, allowing for both local high availability and longer-distance disaster recovery. There are many reasons why DB2 users have embraced HADR, but the one I hear all the time is that it’s built right into DB2, which makes it very easy to set up and maintain.

In the case of a pureScale environment, you’re already getting the highest levels of availability. For instance, with pureScale in the PureData System for Transactions box, a compute node failure wouldn’t result in the database going offline; other compute nodes in the cluster would still be available to process database transactions. So, with HA already accounted for, the value of using HADR in this type of environment is in setting up a disaster recovery system. Now, you do have other options for disaster recovery of PureData System for Transactions, such as Q Replication, and there is functionality within Q Rep that might make it a more suitable choice for a particular database (such as a need to replicate only a partial set of the data in the database). But with HADR you have another option, and those who use it and love it today for their traditional DB2 environments can now use it here as well. For example, you can have a PureData System for Transactions at your primary site and another one at your disaster recovery site. HADR is enabled at the database level, and for one or more databases on the primary system you can create a standby version at the other site. If there is ever a need to fail over to that standby site, it’s a relatively simple process to perform the takeover of the databases on the standby.
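As a rough illustration of why people describe HADR as easy to set up, the basic flow looks something like this (the hostnames, port, and database name below are placeholders, and this is only a sketch of a standard HADR setup; consult the DB2 documentation for the pureScale-specific steps and prerequisites):

```shell
# On the primary system: tell MYDB where its standby lives.
db2 update db cfg for MYDB using \
    HADR_LOCAL_HOST primary.example.com  HADR_LOCAL_SVC 4000 \
    HADR_REMOTE_HOST standby.example.com HADR_REMOTE_SVC 4000 \
    HADR_SYNCMODE ASYNC

# On the standby system: restore a backup image of MYDB, set the
# mirror-image HADR parameters, then start HADR (standby first):
db2 start hadr on db MYDB as standby

# Back on the primary system:
db2 start hadr on db MYDB as primary

# If the primary site is ever lost, the standby takes over:
db2 takeover hadr on db MYDB by force
```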

Rather than getting into the specifics of how this works, how to configure the environment, and so on, I’m going to take the easy route and point you to this comprehensive document on the topic.

Just learning about the product? Check out the PureData for Transactions page.

IBM PureData System for Transactions

Kelly Schlamb, DB2 pureScale Specialist, IBM

By now, I’m hoping that all of you have had the opportunity to hear or see something about IBM’s latest addition to the PureSystems family – the PureData System. Generally available in October 2012, this new system comes in three different models that have been designed and optimized to handle the different types of workloads that your organization typically needs to run: transactional, analytics, and operational analytics.

If you have seen something on PureData before now, you have likely seen the following things mentioned: built-in expertise, integration by design, and simplified experience. There’s a lot of great information out there speaking to these points (and I’ll provide some links below), so I won’t spend too much time on them. However, a key takeaway is that these systems are “simple”. That means simple to order, set up, configure, manage, upgrade, etc. These systems are hardware/software integrated, shipped in the rack, fully assembled; they just need to be powered on and connected to your corporate network. Deployment takes hours, not days.

Having a DB2 pureScale background, the system I’m personally the most interested in is the PureData System for Transactions… a platform that provides transactional data services, with DB2 pureScale at the heart of it.  For those of you who aren’t aware, pureScale is the DB2 feature that provides extreme scalability and availability in a shared data environment.  It could just be my background, but I think it’s very cool technology.

In a previous role I was actually one of the pureScale developers and development managers, but in my current role I’m working directly with clients who are in the process of bringing pureScale into their IT infrastructure. This has given me a unique opportunity both to work with this technology at a low bits-and-bytes level and to see firsthand how clients are using it to power their core business applications. And now with the new PureData System for Transactions, I’m really excited about how it is going to open up further opportunities for our clients who need this type of extreme availability, but perhaps were a bit hesitant to take on the task of standing up a pureScale system themselves.

The PureData System is designed to significantly improve the time to value for new application deployments because the system is already pre-configured and tuned for OLTP workloads. I was recently at the Information on Demand 2012 conference in Las Vegas and there was a lot of buzz around all of this. I had a lot of conversations with folks who were very excited about getting their hands on one of these new boxes.  They told me that more and more of their applications are becoming business critical and high availability has become a necessity.

For more information on the PureData System for Transactions, check out the following pages and videos:

IBM PureData System for Transactions

IBM PureData Systems

IBM PureData System for Transactions Tour with Tim Vincent (video)

IBM PureData System for Transactions (video)


Before signing off, I just wanted to quickly share a quote I saw in an article – just to highlight the kinds of things that analysts are saying. The analyst here says that the new solutions will reinforce Big Blue’s current market and thought-leadership position: “That isn’t a bad thing, unless you happen to be one of the myriad companies traveling in IBM’s wake… But over time, we expect PureData and future IBM solutions to inspire what amounts to a template for what enterprises will come to expect from transaction processing and business analytics solutions.”
