Simplifying Oracle Database Migrations


Danny Arnold, Worldwide Competitive Enablement Team

As part of my role in IBM Information Management as a technical advocate for our DB2 for LUW (Linux, UNIX, Windows) product set, I often enter into discussions with clients that are currently using Oracle Database.

With the unique technologies delivered in the DB2 10 releases (10.1 and 10.5), such as

  • temporal tables, which allow queries against data at a specific point in time,
  • row and column access control (RCAC), which provides granular row- and column-level security beyond traditional RDBMS table privileges,
  • pureScale, for near-continuous-availability database clusters,
  • the Database Partitioning Feature (DPF), for parallel query processing against large data sets (hundreds of terabytes), and
  • the revolutionary new BLU Acceleration technology, which allows analytic workloads to use column-organized tables and deliver performance orders of magnitude faster than conventional row-organized tables,

many clients like the capabilities and technology that DB2 for LUW provides.

However, a key concern is the level of effort required to migrate an existing Oracle Database environment to DB2. Although DB2 has had Oracle compatibility built into the database engine since the DB2 9.7 release, there is still confusion among clients as to what this compatibility means in terms of a migration effort. Today, DB2 provides a native Oracle PL/SQL procedural language compiler, support for Oracle-specific SQL language extensions, Oracle SQL functions, and Oracle-specific data types (such as NUMBER and VARCHAR2). This compatibility layer within DB2 allows many Oracle Database environments to be migrated to DB2 with minimal effort. Much of the stored procedure logic and application SQL written against Oracle Database can run unchanged against DB2, reducing both migration effort and migration risk, because the application does not have to be modified; the testing phase therefore takes much less effort than it would for heavily modified application SQL and stored procedures. Even though the migration effort is relatively straightforward, clients still have questions, and there is a need for a clear explanation of the Oracle Database to DB2 migration process.
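
To give a sense of what this compatibility looks like in practice, here is a minimal sketch of enabling Oracle compatibility mode and compiling a PL/SQL procedure unchanged on DB2. The database, table, and procedure names are hypothetical, and the registry variable must be set before the database is created:

    -- enable Oracle compatibility for the instance, then create the database
    db2set DB2_COMPATIBILITY_VECTOR=ORA
    db2stop
    db2start
    db2 "CREATE DATABASE TESTDB"

    -- a PL/SQL procedure written for Oracle, using VARCHAR2 and NUMBER, compiled
    -- as-is by DB2's native PL/SQL compiler (run from CLPPlus or the CLP with the
    -- statement terminator set to /)
    CREATE OR REPLACE PROCEDURE raise_salary
      (p_empno IN VARCHAR2, p_pct IN NUMBER)
    IS
    BEGIN
      UPDATE employee
         SET salary = salary * (1 + p_pct / 100)
       WHERE empno = p_empno;
    END;
    /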

Recently, a new solution brief entitled “Simplify your Oracle database migrations,” published by IBM Data Management, explains how DB2 and the PureData System for Transactions appliance, built upon DB2 pureScale, can deliver a clustered database environment for migrating an Oracle database to DB2. The brief provides a clear and concise overview of what an Oracle to DB2 migration requires and of the assistance and tooling available from IBM to make a migration straightforward for a client’s environment, including a description of the IBM Database Conversion Workbench, the tool that helps a client move tables, stored procedures, and data from Oracle to DB2.

The Oracle compatibility built into DB2 for LUW makes migrating from Oracle a task that takes minimal effort, and the PureData System for Transactions complements it. PureData System for Transactions provides an integrated, pre-built DB2 pureScale environment that allows a pureScale instance and a clustered DB2 database to be ready for use in a matter of hours, simplifying the implementation and configuration experience for the client. The ease of migrating from Oracle to DB2, combined with the rapid implementation and configuration possible with PureData System for Transactions, is a winning combination for a client looking for a more cost-effective and more available alternative to Oracle Database.

Achieving High Availability with PureData System for Transactions


Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

A short time ago, I wrote about improving IT productivity with IBM PureData System for Transactions and I mentioned a couple of new white papers and solution briefs on that topic.  Today, I’d like to highlight another one of these new papers: Achieving high availability with PureData System for Transactions.

I’ve recently been meeting with a lot of different companies and organizations to talk about DB2 pureScale and PureData System for Transactions, and while there’s a lot of interest and discussion around performance and scalability, the primary reason that I’m usually there is to talk about high availability and how they can achieve higher levels than what they’re seeing today. One thing I’m finding is that there are a lot of different interpretations of what high availability means (and I’m not going to argue here over what the correct definition is). To some, it’s simply a matter of what happens when some sort of localized unplanned outage occurs, like a failure of their production server or a component of that server. How can downtime be minimized in that case?  Others extend this discussion out to include planned outages, such as maintenance operations or adding more capacity into the system. And others will include disaster recovery under the high availability umbrella as well (while many keep them as distinctly separate topics — but that’s just semantics). It’s not enough that they’re protected in the event of some sort of hardware component failure for their production system, but what would happen if the entire data center was to experience an outage? Finally (and I don’t mean to imply that this is an exhaustive list — when it comes to keeping the business available and running, there may be other things that come into the equation as well), availability could also include a discussion on performance. There is typically an expectation of performance and response time associated with transactions, especially those that are being executed on behalf of customers, users, and business processes. If a customer clicks a button on a website and it doesn’t come back quickly, it may not be distinguishable from an outage and the customer may leave that site, choosing to go to a competitor instead.

It should be pointed out that not every database requires the highest levels of availability. It might not be a big deal to an organization if a particular departmental database is offline for 20 minutes, or an hour, or even the entire day. But there are certainly some business-critical databases that are considered “tier 1” that do require the highest availability possible. Therefore, it is important to understand the availability requirements that your organization has.  But I’m likely already preaching to the choir here and you’re reading this because you do have a need and you understand the ramifications to your business if these needs aren’t met. With respect to the companies I’ve been meeting with, just hearing about what kinds of systems they depend on, from both an internal and external perspective, and what it means to them if there’s an interruption in service, has been fascinating.  Of course, I’m sympathetic to their plight, but as a consumer and a user I still have very high expectations around service. I get pretty mad when I can’t make an online trade, check the status of my travel reward accounts, or even order a pizza online, especially when I know what those companies could be doing to provide better availability to their users.  🙂

Those things I mentioned above — high availability, disaster recovery, and performance (through autonomics) — are all discussed as part of the paper in the context of PureData System for Transactions. PureData System for Transactions is a reliable and resilient expert integrated system designed for high availability, high throughput online transaction processing (OLTP). It has built-in redundancies to continue operating in the event of a component failure, disaster recovery capabilities to handle complete system unavailability, and autonomic features to dynamically manage utilization and performance of the system. Redundancies include power, compute nodes, storage, and networking (including the switches and adapters). In the case of a component failure, a redundant component keeps the system available. And if there is some sort of data center outage (planned or unplanned), a standby system at another site can take over for the downed system. This can be accomplished via DB2’s HADR feature (remember that DB2 pureScale is the database environment within the system) or through replication technology such as Q Replication or Change Data Capture (CDC), part of IBM InfoSphere Data Replication (IIDR).
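
To make the disaster recovery piece concrete, here is a minimal sketch of pairing a DB2 database with a standby at another site using DB2’s HADR feature. The database name, hostnames, instance name, port, and sync mode are placeholders, and the standby is assumed to have been initialized from a backup of the primary:

    -- on each site, point the database at its partner (swap LOCAL and REMOTE values on the other site)
    db2 update db cfg for SALES using HADR_LOCAL_HOST siteA.example.com HADR_LOCAL_SVC 50010
    db2 update db cfg for SALES using HADR_REMOTE_HOST siteB.example.com HADR_REMOTE_SVC 50010
    db2 update db cfg for SALES using HADR_REMOTE_INST db2inst1 HADR_SYNCMODE ASYNC

    -- start the standby first, then the primary
    db2 start hadr on db SALES as standby
    db2 start hadr on db SALES as primary

    -- on the standby, switch roles during a planned outage
    db2 takeover hadr on db SALES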

Just a reminder that the IDUG North America 2014 conference will be taking place in Phoenix next month from May 12-16. Being in a city that just got snowed on this morning, I’m very much looking forward to some hot weather for a change. Various DB2, pureScale, and PureData topics are on the agenda. And since I’m not above giving myself a shameless plug, come by and see me at my session: A DB2 DBA’s Guide to pureScale (session G05). Click here for more details on the conference. Also, check out Melanie Stopfer’s article on IDUG.  Hope to see you there!

DB2 with BLU Acceleration and Intel – A great partnership!


Allen Wei, DB2 Warehousing Technology, System Verification Test, IBM

DB2 with BLU Acceleration is a state-of-the-art columnar-store RDBMS (Relational Database Management System) masterpiece that combines and exploits some of the best technologies from IBM and Intel. In the video linked below, there is mention of an 88x speedup compared with the previous generation of row-store RDBMS on the exact same workload. That announcement was made during IBM IOD in November 2013.

Guess what? In a test done a few days ago (less than 3 months after the video was filmed), the speedup, again comparing DB2 with BLU Acceleration against a row-store RDBMS using the exact same workload, this time on new Intel Xeon Ivy Bridge-EX based hardware, is now 148x. Really? Need I say more? This shows that not only is DB2 with BLU Acceleration equipped with innovative technologies, but it also combines exactly the set of technologies from both RDBMS and hardware advancement that you really need. This helps BLU Acceleration fully exploit hardware capabilities to the extreme and give you the best ROI (Return on Investment) that every CTO dreams about.

You might start wondering if this is too good to be true. I have shown you the numbers. So, no, it is the truth. You might want to ask, even if this is true, is it complicated? Well, it does take discipline, innovative thinking and effort to offer technologies like this. However, my answer is again a no. It’s completely the opposite! In fact, As Seen on TV (the video clip), it’s as simple as this: create your tables, load your data and voila! Start using your data. There is no need for extensive performance tuning, mind-boggling query optimization or blood-boiling index creation. Leave these tasks to DB2 with BLU Acceleration.  Can’t wait to try it for yourself? It really is that fast and simple.
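
If you want to see what “create your tables, load your data and go” amounts to, here is a minimal sketch using the DB2 command line; the database name, table definition, and input file are hypothetical:

    -- set before creating the database so new tables default to column organization
    db2set DB2_WORKLOAD=ANALYTICS
    db2 "CREATE DATABASE BLUDB"
    db2 "CONNECT TO BLUDB"
    db2 "CREATE TABLE sales (sale_date DATE, store_id INT, amount DECIMAL(12,2))"
    db2 "LOAD FROM sales.csv OF DEL REPLACE INTO sales"
    db2 "SELECT store_id, SUM(amount) FROM sales GROUP BY store_id"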

Do you need to hear more before you are 100% convinced? Let me begin by recalling a few of the key disruptive technologies that are built into DB2 with BLU Acceleration. These are mentioned in the video clip as well, and I will prove to you that we are not listing them just for the sake of listing them.

What state-of-the-art technology was built into DB2 with BLU Acceleration that makes it so great? Here is a summary of what you saw in the video clip:

# Dynamic In-Memory Technology – loads terabytes of data into random access memory instead of hard disks, streamlining query workloads even when data sets exceed the size of the memory.

  • This allows the CPU to operate efficiently without waiting on disk I/O operations
  • In one of my tests, I could fit a 2TB database into 256GB of RAM, or 1/8 of the database size
  • In another test, I could fit a 10TB database into 1TB of RAM, or 1/10 of the database size

# Actionable Compression – Deep data compression, with actions performed directly on compressed data

  • Deep compression
  • I noticed storage space consumption that was 2.8x – 4.6x smaller than the corresponding row-store database, depending on the size of the database (a catalog query for checking this on your own tables follows this list)
  • Data can be accessed as is, in compressed form, with no decompression needed
  • The CPU can dedicate its power to query processing rather than to decompression algorithms.

#  Parallel Vector Processing – Fully utilize available CPU cores

  • Vectors are processed more efficiently, so CPU efficiency increases
  • All CPU cores are fully exploited.

#  Data Skipping – Jump directly to where the data is

  • We do not need to process irrelevant data.
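
To see how your own tables are doing once they are loaded, one option is to query the catalog. This is a minimal sketch that assumes statistics have been collected (for example, via RUNSTATS) so that the page-savings column is populated:

    -- TABLEORG = 'C' marks a column-organized (BLU) table;
    -- PCTPAGESSAVED gives a rough view of the storage savings
    SELECT TABNAME, TABLEORG, PCTPAGESSAVED
      FROM SYSCAT.TABLES
     WHERE TABSCHEMA = CURRENT SCHEMA
     ORDER BY PCTPAGESSAVED DESC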

Have you been convinced yet? I know you have. However, you don’t need to just take my word for it. Try it. The time you spent reading this blog and trying to find a loophole is enough to give yourself a high-performance database from scratch. Period.

Read more about the Intel and BLU Acceleration partnership here: DB2 BLU Acceleration on Intel Xeon E7v2 Solutions Brief

Allen Wei joined IBM as a developer for the BI OLAP product line, including OLAP Miner. He was a key member of the InfoSphere product line and has led the InfoSphere Warehouse and DB2 LUW system verification tests (SVTs). He currently focuses on tuning the performance of BLU Acceleration, mainly with respect to the Intel partnership.

Visit the IBM BLU HUB to learn more about the next generation of in-memory database technology!

Also check out this post on the Intel Blog about how IBM and Intel have been working together to extract big data insights.

Introducing the IBM BLU Acceleration Hub


John Park, Product Manager – DB2, BLU Acceleration for Cloud.

Hemingway once wrote “There is nothing to writing. All you do is sit down at the typewriter and bleed” — he also wrote — “The best way to find out if you trust someone is to trust them.”

So when Susan Visser “pinged” me on IBM’s venerable Sametime system asking me to blog for the launch of ibmbluhub.com, my immediate response was “I don’t blog, I only know how to post pictures of my pets and kid to Facebook.” She responded, “It’s easy, trust me.” Hence the quotes.

So here I am, and who am I? Well, my name is John Park. I am an IBM’er, I am a DB2 Product Manager and as of today, I am a blogger (?).

My IBM life has revolved around DB2 and the analytics space, starting off as a developer building the engn_sqm component (think snapshot, event monitor and governor) – so if your stuff is broken, it’s probably my fault.

Then I moved into the honorable realm of product management, leading the charge on products such as Smart Analytics Systems, PureData for Operational Analytics and now BLU Acceleration for Cloud … which is why I guess I’m here.

On a personal note, I like to build stuff – specifically, I like to build cool stuff, and BLU Acceleration is freakin’ cool. When I think about the significance of this technology, I think back to fixing Version 7 of DB2, building V8 and writing my last piece of code in V9.5. All along the way, the DB2 team was building features and products that helped our customers and our users use DB2.

Personally, I see BLU as a convergence point, the pinnacle of where all the years of engineering and thought leadership have finally come to a “eureka” moment.  Let me guide you through my thinking …

Autonomic features such as Automatic Maintenance, Self-Tuning Memory Management and Automatic Workload Management were all incremental steps across DB2 version releases; each fixed a problem the DB2 user had and improved the experience of using DB2.

DB2’s compression story started with row compression and index compression, then went to adaptive row compression and now to actionable compression, and with each step in the compression story came a better value proposition for the DB2 user.  (Note that the word compression is used 6 times!)

DB2’s performance optimization journey went from database partitioning and MPP to table and range partitioning, continuous ingest, multi-temperature storage and workload management, making DB2 a leader in performance across all workloads.

Usability in its simplest form, value-driven compression and unprecedented performance are the three key tenets behind the development of BLU. These features improved DB2 incrementally between versions, and as the product grew, our engineers’ experience and creativity expanded. With BLU we see these features, and the knowledge gained from developing them, transform into and support the simplest statement I have ever seen in enterprise software – “Create, Load and GO”. Simply amazing.

Welcome to the world, “ibmbluhub.com”. For the readers: enjoy the ride, this is the first step in a new direction for data management and technology. And I haven’t even talked about BLU Acceleration for Cloud yet …

Until then, here is a picture of my cat.

John Park

DBaaS Explained, or Miracles in Minutes

Bill Cole

Bill Cole – Competitive Sales Specialist, Information Management, IBM


Don’t you love installing databases?  I mean the whole stack, from the database software through the tools and applications.  Lots of real innovation there, eh?  How many times have you done this sort of thing?  My friend Don was sent to install some very complex software for a client.  The scope was one (count ‘em, one) environment.  So Don calls me the first day he’s there and says the project wants – and this is absolutely true – seventeen copies of the environment.  As an MIS guy, he was offended.  What in the world did they need with so many copies of the environment?  Turns out that every developer wanted his/her own copy.  Given that the install process for any single environment was two weeks and Don had three weeks, all those copies weren’t happening.  Don politely explained the situation, including the fact that the disk space available would barely accommodate a single installation.  Disappointment and relief.  Guess who experienced which emotion.

I’ve written on this topic before in relation to patterns.  This time I’d like to talk about a concept that’s old or new depending on whom you ask: Database as a Service.  Frankly, the naming is unfortunate since the concept is about providing a complete environment for an application, not just the database.  To a DBA, of course, the database is the most important part.  LOL.

During my tenure as a data center director, I thought of my infrastructure team as providing a service not only to the IT department but to the rest of the company as well.  The real trick there was getting my team – and the CIO – to understand that.  We’re IT after all.  Miracles are our stock in trade.  It’s what we did every day.  The application folks came to rely on our miracles, too.

As we built the data center – I had the luxury of starting from scratch – we agreed on the number of environments, their specific uses, and so on.  This lasted for almost two weeks once we actually got started.  From the three we had agreed to, my team was challenged to create and manage seven, each with its own rules, users, priorities, sources and schedules.  The application team didn’t think we could do it.  I created the processes and procedures.  And I participated in them, too.  Shared misery.  LOL.  The apps folks would challenge us to do something in a hurry and I’d quote a short turn-around and then beat it by 50% most of the time.  It both annoyed and amazed.

This was our own DBaaS, but we should first define the term.  DBaaS is initiated by a user asking for an application environment to be provisioned.  Note that it’s the full application environment, including the database, application code, any web- or forms-based pieces, plus the ancillary tools.  Crucially, the request includes three other critical pieces of information.  First is the sizing: is this a small, medium, large or massive environment?  Second, we need to understand how the environment will be used: is this a sandbox, training, development or Production implementation?  Third, which department is getting billed for this?  After all, IT didn’t just collectively decide to create this environment.  Perhaps we should add something about managing this collection.  Are we applying patches or updates?  What’s the life-span: a few days, weeks or years?  What’s the SLA?  It’s not simply “gimme a copy of Prod.”  We need a complete description.

Note the assumptions we make in DBaaS.  First, we assume there’s a “golden” or benchmark source from which to create the environment.  If not, it’s a whole new installation and the price just went up.  Second, that we have server resources available, probably virtualized servers, too!  And that provisioning this new environment won’t step on any existing environments.  Third, that there’s disk space available.  Again, this is space that won’t step on the performance of any other environment.  I add this caveat because I once had a CIO call me out of the blue and ask me about the rules for putting data on his expensive SAN.  It seems all those CYA documents and spreadsheets were killing the performance of a critical production database.  (If you know of a clearly enunciated set of rules, please share them with me.)  Finally, we need to ensure network connectivity and bandwidth.  Don’t forget this critical piece of the pie.  We can’t assume that there’s room in the pipe for an added load.

The pledge we in IT make covers both timescale and accuracy.  We give the requestor a time for completion that may be tomorrow or, at worst, the following day.  I know that sounds wildly aggressive for some types of provisioning, but we have to build our sources and processes so we can install them quickly on a variety of operating environments.

All of the above is where DB2, SoftLayer and PureSystems shine brightly.  DB2 can be provisioned quickly from copies in a number of ways, including from cold or warm backups.  PureSystems implements all the advantages of DB2, and PureApps includes DBaaS tools.  And SoftLayer is really the gold standard for DBaaS.  Just take a look.
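
As a small illustration of provisioning from a “golden” copy, here is a minimal sketch of cloning a DB2 database from a cold (offline) backup; the database names, backup path, and timestamp are placeholders:

    -- on the source system: take an offline backup as the golden copy
    db2 backup db PRODDB to /backups compress

    -- on the target system: provision a new copy under its own alias
    -- (add WITHOUT ROLLING FORWARD or a ROLLFORWARD step, depending on the logging configuration)
    db2 restore db PRODDB from /backups taken at 20140401120000 into DEVCOPY1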

Finally, my wife works for a large application development organization masquerading as a bank.  She gets very frustrated with environment management and provisioning.  Fortunately, I understand both sides of that story, so I sit and listen patiently while she tells me what the bloody-minded idiots who manage their various dev/test/prod environments have done to make her life difficult today.  The thing that gets her maddest is that no one seems to know how to provision any new environments.  With dozens of infrastructure staff, they can’t, because no one has really thought through the process.  So the dev teams just sit around waiting.  Losing precious time.  It’s really not a full service bank, I guess.  Don’t expect a miracle.

Follow Bill Cole on Twitter: @billcole_ibm

Tales from the Chipset


Bill Cole – Competitive Sales Specialist, Information Management, IBM

My history in computing is littered with different roles.  Everything from operator to programmer to architect as well as (forgive me) manager and director.  A few lives back, I worked for a company that built the first computer on a single chip.  It was a beast to make it productive outside the lab.  In fact, it was so difficult that I was the first one to make it work in the field.  The funny thing was that the whole configuration process was carried out using (this is absolutely true) staples.  That’s right, staples!  I keep a copy of that chip in my memento cabinet, too.  By now the whole adventure seems a bit surreal.  It was fun, though, and I learned a lot.

That was my first adventure with chipsets.  But not my last.  Later, when I managed a database development team for that company, I had a further education into just how software and hardware work together to build a system that delivers the performance customers need to address business problems and opportunities.

That’s what makes a system special, I think.  It’s not a matter of simply slapping together some software or building a chip and hoping customers will show up.  Sure, you can build vanilla software that works on a chipset, but there’s no synergy in that.  Synergy is when the system is better than the sum of its parts.  That’s what DB2 on Intel is all about.  The IBM and Intel team consists of engineers and developers from both companies who work together to optimize the chip and the DB2 engine so that our mutual customers get the fastest return on investment.

So, you ask, how is that different from any other database?  It’s not just different, it’s unique.  Doesn’t a certain red database run on the same hardware?  Yes.  And they use the same code line for Intel platforms that they do for any other bit of hardware.  The code doesn’t know where it’s running and can’t make use of those features that would give their customers the sort of performance that DB2 delivers on the same chipset.

But SQL Server also runs on the chipset.  Ah, yes, that’s true but it, too, is a prisoner of a code line.  It’s not optimized for the chipset; it’s optimized for the operating system.

So what chipset do most Linux installations run on?  I think we all know the answer to that one.  Intel, of course.  SQL Server is out of the picture.  That red database is still running the same code line that runs on Windows and every other environment.  Still no optimization.

I know, whine, whine, whine.  What does this mean to me and my organization?  Simple.  Better performance.  Better return on investment through improved, sustainable and predictable performance.

Talk, talk, talk.  Show me, you say.  Let’s take the easy one first.  Vector instructions.  I’ve written about these in an earlier post and I’ll amplify that now.  These instructions push multiple data streams through the registers with a single instruction that uses a single processor cycle.  This means we’re multiplying MIPS because we don’t have to wait on multiple cycles to process the same data stream.  Said in another way, it’s sort of like doing homework for all your courses simultaneously.  Wouldn’t that have been nice!

Then there’s register width.  DB2 is built to manage compression based on the width of the registers.  Filling a register means more efficient use of the fastest resource in the processor.  That’s exactly what DB2 does.  We know the width of the registers on Intel chips – and Power chips, too – so we make sure we’re filling those registers to get the most out of the instructions.  Again, this is not only efficient and saves disk space, it makes the overall system faster since we don’t have to use as many instructions to compress and de-compress the data.  Easy.

So the vector instructions and registers make DB2 fast and efficient.  What else is there?  Balance, my friends.  I’ve been involved with building and tuning systems for too long to think that the processor is all there is to system performance.  Indeed, it’s processor, memory and I/O that combine to give us the performance we get.  Again, this is where knowing the chipset comes in very handy.  DB2 is written to take advantage of the various caches on the x86 chipset, including the I/O caches.

And we have worked with Intel to understand where the sweet spot is for the combination of processor, memory and I/O.  Standard configurations are pre-tuned to fit within the guidelines we know will give you the best performance.

And then there are the tools that help us make the most of the chipset.  You may not realize that the release of a chipset includes tools for compiling, profiling and optimizing the software that runs on those chips.  DB2 on Intel has gone through the process of being optimized specifically for Intel on any of the operating systems for which we have distributions.  (I know that was awkward, but my old English teachers would haunt me for weeks if I ended a sentence with a preposition!)  Gotta like that.  Seems a great arrangement to me.

Finally, my wife loves IKEA, where all the furniture comes in kits.  We’ve built lots of furniture out of those kits.  And they always include tools that work with that kit.  Taking advantage of a chipset is much the same way.  Use the tools to get the most from the hardware.  There’s no sense in buying too much hardware just because your database is blind, eh?

Being the program manager for databases as I mentioned above gave me the opportunity to sit in front of many customers to listen to them tell me about their experiences with my hardware and software, both good and bad.  I carry those conversations with me today.  Your story is the one I want to hear.  Let me know what DB2 is doing for you.

Follow Bill Cole on Twitter: @billcole_ibm

Watch this video on the collaboration between Intel and DB2 with BLU Acceleration!

My Processors Can Beat Up Your Processors!


Bill Cole, Competitive Sales Specialist, Information Management, IBM

I grew up a fan of Formula 1 and the Indianapolis 500. One of the great F1 racers was a British fellow named Stirling Moss.  Look him up.  He won lots of races for Mercedes-Benz and then went back to England and drove only British marques.  They were underpowered but he still won races.  The other drivers hated that he was driving a four-cylinder machine while they were driving V-8s and he still passed them.  And he waved and smiled as he went around them!  To paraphrase the song, every car’s a winner and every car’s a loser….

The moral of the story?  Make the most of the equipment you’ve got.  Stirling was a master of that concept.  We have the same issue in computing.  We assume that brute force will give us better performance.  Bigger is always better.  Speed comes from executing instructions faster.  Over-clocking is a wonderful thing.  More processors!!!  Gotta have more processors!  We’re geeks, so our “measuring” against each other is about our computers.  We’re dead certain that more and faster is the answer to the ultimate question of performance and capacity.

Oh really?  I know a red database that uses multiple processors to execute a single query in parallel.  Good idea?  Uh, not if you need to execute more than one query at a time on those processors.  You see, each query in that database believes it’s the only query on the system and should consume all of the resources.  No thought of tuning the workloads on the system, just single queries.  You see the white hair?  Getting the best use of all the processors was an art form with a moving target.

It’s really a matter of using your resources wisely, and DB2 10.5 was created from the outset to use the system resources to enhance the performance of all the workloads on a system.  In the example above, a simple parallel query with a low priority could essentially stop a queue full of high-priority jobs because it seized all the processors and wouldn’t give them back until the query was complete.  Explaining that one gets to be a bit technical.  Not to mention uncomfortable.

DB2 10.5 sizes up the running workloads and allocates resources according to priorities and needs.  Each new workload gets the same scrutiny before it’s added to the mix.  In fact, a low priority workload might even get held momentarily to make sure that the highest priority workloads complete quickly.  It’s that simple.
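
Under the covers, this is DB2’s workload management at work, and for most systems it needs no setup at all. For readers who like to see the moving parts, here is a rough sketch of how a DBA could explicitly weight CPU between service classes; the names and share values are hypothetical, and the exact options vary by release:

    -- enable the workload management dispatcher and CPU shares
    db2 update dbm cfg using WLM_DISPATCHER YES WLM_DISP_CPU_SHARES YES

    -- give order-entry work four times the CPU weight of background reporting
    db2 "CREATE SERVICE CLASS order_entry SOFT CPU SHARES 8000"
    db2 "CREATE SERVICE CLASS reporting SOFT CPU SHARES 2000"
    db2 "CREATE WORKLOAD rpt_apps APPLNAME('reportgen') SERVICE CLASS reporting"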

And it’s not just processors, folks.  Memory is a valuable resource, too.  If every workload requires all the available memory, you’re going to have a problem.  So, in the same way as with processors, DB2 10.5 allocates memory to workloads on the basis of need and priority so that workloads complete properly.
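
Much of that memory balancing comes from DB2’s self-tuning memory manager, which shifts memory between consumers as demand changes. Here is a minimal sketch, assuming a database named MYDB and the default buffer pool:

    -- let DB2 tune database memory and the buffer pool automatically
    db2 update db cfg for MYDB using SELF_TUNING_MEM ON
    db2 update db cfg for MYDB using DATABASE_MEMORY AUTOMATIC
    db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC"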

That red database I mentioned.  It seems to believe that lots of badly used memory is the cure for everything.  They confuse the terms in-memory database and database in memory.  I’ve seen users pull an entire database into memory and still get lousy performance because the memory is managed badly.  I once re-tuned a Production database’s cache due to memory limitations causing queries to fail.  I dropped the cache size by 75% and didn’t mention it to anyone.  The next day I pointed out that we no longer had memory problems and everything was working well – and performance was what it always had been.

Note that DB2 10.5 does all this for you.  No resource groups or complicated formulas to get wrong.  No additional software to purchase or manage.  It’s all part of the package.  You don’t have to go back and modify your tables or applications.  Just load and go!  Get the benefits immediately.  Nice.

All this speaks to sizing our systems, too.  We inspect the workloads that will run on the system, not just a few queries, and fill out the necessary expenditure request.  We even add a fudge factor, right?  Maybe we add a few extra processors and a bit of extra memory as room for expansion.  But the workloads grow faster than anyone’s predictions and the hardware we ordered with all that extra capacity is going to be strained if we just throw workloads at it and hope for the best.  Making intelligent use of that capacity is our job – and the job of the software we deploy.  We can do the sizing with the confidence that our software has our back.

Finally, my uncle Emil kept a pair of draft horses on his farm long after he stopped farming with them.  He loved his horses and refused to part with these last two.  Other than being very large pets, they were good for one thing: Pulling cars out of the mud.  Folks would get stuck in the mud at the bottom of the hill after a good storm.  Inevitably, there’d be a knock on his door and he’d hitch one of the horses to the car and pull it out.  One night a truck got stuck after a good rain.  Emil hitched both horses to the truck.  The driver protested that horses couldn’t pull his truck out of the mud.  Emil smiled and talked to the horses.  They leaned into their collars and walked the truck out of the mud.  The driver opened his wallet.  Emil waved the man down the road and put the horses away.  Just being a good neighbor.  Getting the most out of our equipment is part of our compact with the business and BLU’s got that going on like no one else.  You’ll be a hero for thinking so far ahead!

Try DB2 10.5 today, and experience the innovative features and benefits that DB2 has to offer!