pureScale at the Beach – What’s New in the DB2 “Cancun Release”

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

 

Today, I’m thinking about the beach. We’re heading into the last long weekend of the summer, the weather is supposed to be nice, and later today I’ll be going up to the lake with my family. But that’s not really why the beach is on my mind. Today, the DB2 “Cancun Release” was announced and made available, and as somebody who works extensively with DB2 and pureScale, it’s a pretty exciting day.

I can guarantee you that over the next little while, you’re going to be hearing a lot about the various new features and capabilities in the “Cancun Release” (also referred to as Cancun Release 10.5.0.4 or DB2 10.5 FP4). For instance, the new Shadow Tables feature — which exploits DB2 BLU Acceleration — allows for real-time analytics processing and reporting on your transactional database system. Game-changing stuff. However, I’m going to leave those discussions up to others or for another time and today I’m going to focus on what’s new for pureScale.
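To give a flavour of what a shadow table looks like, here’s a minimal sketch of the DDL shape (table names are hypothetical, and a complete setup also involves InfoSphere CDC replication and latency-based query routing, so treat this as the shape of the feature rather than a full recipe):

    -- Hypothetical sketch: a column-organized (BLU) shadow of a row-organized
    -- OLTP table, kept current by replication; the optimizer can transparently
    -- route analytic queries to it
    CREATE TABLE app.sales_shadow AS
      (SELECT * FROM app.sales)
      DATA INITIALLY DEFERRED REFRESH DEFERRED
      ENABLE QUERY OPTIMIZATION
      MAINTAINED BY REPLICATION
      ORGANIZE BY COLUMN;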

As with any major new release, some things are flashy and exciting, while others don’t have that same flash but make a real difference in the everyday life of a DBA. Examples of the latter in Cancun include the ability to perform online table reorgs and incremental backups (along with support for DB2 Merge Backup) in a pureScale environment, additional Optim Performance Manager (OPM) monitoring metrics and alerts around the use of HADR with pureScale, and the ability to take GPFS snapshot backups. All of this leads to improved administration and availability.
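To illustrate a couple of these (a sketch with hypothetical database and table names; incremental backup assumes archive logging is enabled, and requires change tracking plus a fresh full backup first):

    # Enable change tracking, take a full online backup, then incrementals
    db2 "UPDATE DB CFG FOR mydb USING TRACKMOD ON"
    db2 "BACKUP DATABASE mydb ONLINE"
    db2 "BACKUP DATABASE mydb ONLINE INCREMENTAL"

    # Online, in-place table reorg that allows concurrent read/write access
    db2 "REORG TABLE app.orders INPLACE ALLOW WRITE ACCESS"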

There’s a large DB2 pureScale community out there, and over the last few years we’ve received a lot of great feedback on the up-and-running experience. Based on this, various enhancements have been made to provide faster time to value, with improved ease of use and serviceability across installation, configuration, and updates. This includes improved installation documentation, enhanced prerequisite checking, beefed-up versions of some of the more common error and warning messages, improved usability for online fix pack updates, and the ability to perform version upgrades of DB2 members and CFs in parallel.

In my opinion, the biggest news (and yes, the flashiest stuff) is the addition of new deployment options for pureScale. Previously, the implementation of a DB2 pureScale cluster required specialized network adapters — RDMA-capable InfiniBand or RoCE (RDMA over Converged Ethernet) adapter cards. RDMA stands for Remote Direct Memory Access, and it allows for direct memory access from one computer into that of another without involving either one’s kernel, so no interrupt handling and no context switching take place as part of sending a message via RDMA (unlike with TCP/IP-based communication). This allows for very high-throughput, low-latency message passing, which DB2 pureScale uniquely exploits for very fast performance and scalability. Great upside, but the downside is the requirement for these specialized adapters and an environment that supports them.

Starting in the DB2 Cancun Release, a regular, commodity TCP/IP-based interconnect can be used instead (often referred to as using “TCP/IP sockets”). What this gives you is an environment that has all of the high availability aspects of an RDMA-based pureScale cluster, though it isn’t necessarily going to perform or scale as well as an RDMA-based cluster will. However, this is going to be perfectly fine for many scenarios. Think about your daily drive to work. While you’d like to have a fast sports car for the drive in, it isn’t necessary for that particular need (maybe that’s a bad example — I’m still trying to convince my wife of that one). With pureScale, there are cases where availability is the predominant motivator for using it and there might not be a need to drive through massive amounts of transactions per second or scale up to tens of nodes. Your performance and scalability needs will dictate whether or not RDMA is required for your environment. By the way, you might see this feature referred to as pureScale “lite”. I’m still slowly warming up to that term, but the important thing is that people know “lite” doesn’t imply lower levels of availability.
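To give a sense of what this looks like at instance-creation time (a rough sketch only; hostnames, devices, and exact options are hypothetical and vary by platform and fix pack, so check the documentation rather than treating this as a recipe), the member and CF netnames simply point at ordinary Ethernet hostnames instead of RDMA-capable interfaces:

    # Hypothetical sketch: the netnames refer to plain Ethernet interfaces, so
    # the cluster interconnect uses TCP/IP sockets rather than RDMA
    db2icrt \
      -m host1 -mnet host1-eth0 \
      -cf host2 -cfnet host2-eth0 \
      -instance_shared_dev /dev/hdisk1 \
      -tbdev /dev/hdisk2 \
      -u db2fenc1 \
      db2sdin1
    # Additional members and a second CF can then be added with db2iupdt -add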

With the ability to do this TCP/IP sockets-based communication between nodes, more virtualization options also open up. For example, DB2 pureScale can be implemented using TCP/IP sockets in both VMware (Linux) and KVM (Linux) on Intel, as well as in AIX LPARs on Power boxes. These virtualized environments provide a lower cost of entry and are perfect for development, QA, production environments with moderate workloads, or just getting yourself some hands-on experience with pureScale.

It’s also worth pointing out that DB2 pureScale now supports and is optimized for IBM’s new POWER8 platform.

Having all of these new deployment options changes the economics of continuous availability, allowing broad infrastructure choices at every price point.

One thing that all of this should show you is the continued focus and investment in the DB2 pureScale technology by IBM research and development. With all of the press and fanfare around BLU, people often ask me if this is at the expense of IBM’s other technologies such as pureScale. You can see that this is definitely not the case. In fact, if you happen to be at Insight 2014 (formerly known as IOD) in Las Vegas in October, or at IDUG EMEA in Prague in November, I’ll be giving a presentation on everything new for pureScale in DB2 10.5, up to and including the “Cancun Release”. It’s an impressive amount of features that’s hard to squeeze into an hour. :-)

For more information on what’s new for pureScale and DB2 in general with this new release, check out the fix pack summary page in the DB2 Information Center.

Is Your Database a Hero or a Hindrance?

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Here’s a big question for you – Is your database a hero or a hindrance? In other words, is your database environment one that’s helping your organization meet your performance, scalability, and availability needs, or is it holding you back from meeting your SLAs and keeping up with ever-changing business needs?

Join me for an Information Week webinar on this topic next week — Thursday, September 4th at 12pm EDT — where I’ll be talking about these types of challenges faced by IT organizations and how DB2 has the capabilities to address those challenges. News about some of these capabilities will be hot off the press, so you won’t want to miss it.

Click here to register


Steps toward the Future: How IBM DB2 Is Changing the Game

Tori McClellan
Super Awesome Social Media Intern

 

Welcome to the New Age of database technology!

IBM DB2 with BLU Acceleration changes the game for in-memory computing.  Due to the importance of in-memory computing, we created a dedicated website to take you through all the details, references, and more: www.ibmbluhub.com!  This website is in place to help clients and prospects understand what next-gen in-memory computing can do for them and why IBM BLU is the ideal in-memory database to deliver fast answers.

A few examples of how IBM BLU has helped other clients find their ideal balance between speed and quality:

  1. Regulatory reporting is a huge challenge for all banks - Handelsbanken, one of the most profitable banks in the world, currently produces reports monthly but is expected to produce them daily in the near future. DB2 with BLU Acceleration has helped Handelsbanken analysts get the data they need for daily reports via its columnar store. Learn more by watching this video: http://bit.ly/1u7urAA
  2.  Deploying DB2 with BLU Acceleration is simple - with only a handful of commands, you can turn on analytics mode, create a new database (or auto-configure an existing one) to make the best use of your hardware for analytics, and then load the data (see the sketch after this list). Learn more from this IBM Redbook, which introduces the concepts of DB2 with BLU Acceleration from the ground up and describes the technologies that work hand in hand with it: Architecting and Deploying IBM DB2 with BLU Acceleration in Your Analytical Environment.
  3.  Get the FACTS and stay current by subscribing to the ibmbluhub.com newsletter.
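For item 2, the “handful of commands” looks roughly like this (a minimal sketch with hypothetical database, table, and file names):

    # Tell DB2 the instance will run analytic workloads; databases created
    # afterwards are auto-configured for BLU Acceleration
    db2set DB2WORKLOAD=ANALYTICS
    db2 "CREATE DATABASE salesdb AUTOCONFIGURE USING MEM_PERCENT 80 APPLY DB AND DBM"
    db2 "CONNECT TO salesdb"
    # With DB2WORKLOAD=ANALYTICS, new tables are column-organized by default
    db2 "CREATE TABLE sales (sale_date DATE, store_id INT, amount DECIMAL(10,2))"
    db2 "LOAD FROM sales.csv OF DEL REPLACE INTO sales"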

- IBM DB2 with BLU Acceleration is a revolutionary technology and delivers breakthrough performance improvements for analytic queries by using dynamic in-memory columnar technologies.

- Different from other vendor solutions, BLU Acceleration allows the unified computing of online transaction processing (OLTP) and analytics data inside a single database, thereby removing barriers and accelerating results for users. With observed hundredfold improvements in query response time, BLU Acceleration provides a simple, fast, and easy-to-use solution for the needs of today’s organizations; quick access to business answers can be used to gain a competitive edge, lower costs, and more.

- Subscribe to the newsletter to continue learning about this hot in-memory database.  You will receive a periodic iNews email, which links to what’s new.  Just click and learn: http://www.ibmbluhub.com/blu-inews/


If this information suits your needs, be sure to follow @IBM_DB2 on Twitter to get the latest news as it is published.

How to Revolutionize Analytics with Next-Generation In-Memory Computing

by Les King
Director, Big Data, Analytics and Database Solutions – Information Management, Software Group

 

We are now in the era of cognitive analytics. These are analytic processes that provide useful information with a timeliness that qualifies as “speed of thought”. More and more clients are leveraging the next generation of analytic computing to address business challenges which could never be handled before.

To understand this idea, here’s a fun video that explains this theory a little better and gives a real business example of exactly this: What do chicken dinners have to do with IBM?

As another example, just recently a few friends and I were looking for a coffee shop which had both WiFi and a table which was near a working power outlet. We were surprised to discover that a coffee shop in the area was analyzing the information from our mobile devices and was able to let us know that they had what we were looking for. Coffee shops are all over the place, but, that real time analytics and communication with us was what made the difference. The coffee shop doing this real-time analytics ended up getting our business.

What do the two business examples above have in common? They both require analyzing large volumes of information and taking action on it very quickly. One of the key technologies allowing clients to accomplish this is in-memory computing. Hardware can handle an ever-increasing volume of memory and processing power. There have also been amazing strides in the area of data compression. Vendors who provide the ability to analyze data in memory, while compressed, will have a huge advantage with these analytic workloads.

An example of this would be IBM’s DB2 with BLU Acceleration. DB2 with BLU Acceleration provides an average of 10X (90%) compression rates. This means 1 TB of data can be stored in about 100 GB of space. DB2 with BLU Acceleration stores data in memory in its compressed form, using less memory to store vast amounts of business data. More importantly, DB2 with BLU Acceleration can analyze this data while compressed. This combination of capabilities positions DB2 with BLU Acceleration as a key technology in the era of big data and cognitive analytics.

When you consider the business examples above, you can see the competitive advantage these companies will have. These next generation analytic infrastructures, which leverage in-memory computing, will allow these companies to grow their business and take clients from their competitors.

To hear another example of how this modernization of a company’s analytic infrastructure is helping solve real-world business challenges, check out this upcoming webinar, “How to Revolutionize Analytics with Next-Generation In-Memory Computing“, taking place on Sept 25 at 12:00 EDT.


Tweetchat on Fraud Prevention in Banking

By Radha Gowda
Product Marketing Manager, DB2 and related offerings

On August 7, 2014, at 11 AM EDT, the IBM Data Management team is privileged to have Robert L. Palmer, James Kobielus, and Wilson Davis join us for a tweetchat to share their expertise on #FraudPrevention in banking.  Some topics on which we shall be soliciting your opinions:

  • Q1: Are fraudulent activities in banking increasing or decreasing? Why?
  • Q2: What are some key business impacts of fraud?
  • Q3: What measures can be taken to identify potential fraudulent transactions?
  • Q4: What analytics do you need to detect fraud?
  • Q5: What data sources can contribute to the analytics?
  • Q6: How can your systems analyze transactions as they occur?
  • Q7: How can new technologies such as in-memory analytics help in fraud detection?
  • Q8: Where can I learn more?

Here’s what you need to do to join our conversation, whether to contribute or just listen:

  • Go to twubs.com or tweetdeck.com
  • Sign in with your twitter handle
  • Search on #FraudPrevention
  • A new window will open that makes it easy for you to follow and contribute.

If you plan to contribute to our tweetchat, please review the tips at slideshare, since the chat can be very fast-paced. Suggested resources relevant to the topic include:

  1. How to Mitigate Fraud and Cyber Threats with Big Data and Analytics
  2. IBM data management for banking
  3. Best practices to deploy IBM Banking Data Warehouse model to #IBMBLU for production
  4. Attract and retain customers with always-on digital mobile banking services
  5. Fight against fraud in real-time and save on operating expenses
  6. Customize offers to your clients with the data already at your fingertips
  7. World’s top 5 most secure bank is becoming more strategic and more profitable
  8. Regulatory reporting headaches? See how @Handelsbanken solved their reporting challenges

More about our panelists:

Robert L. Palmer (@bigdatabusiness) Global Banking Industry Marketing, Big Data, IBM

Bob’s expertise is applying B2B software to optimize key business processes.  He is a subject matter expert in financial services, and writes about business challenges, Big Data, analytics, CRM, cognitive computing, and information management.

James Kobielus (@jameskobielus) Senior Program Director, Big Data Analytics, IBM

James is a popular speaker and thought leader in big data, Hadoop, enterprise data warehousing, advanced analytics, business intelligence, data management and next best action technologies.

Wilson Davis (@wilsondavisibm) Executive Technical Consultant – Counter Fraud iCoC, IBM

Wilson’s specialties include financial and operational data analytics, counter-fraud and anti-money laundering, straight-through-processing, and game changing improvements in business processes and application systems for the financial services industry.

The data advantage: Creating value in today’s digital world

The IBM Institute for Business Value is looking to understand how organizations around the globe are creating business value from analytics. If you can spare a few minutes to participate in the survey, you’d be the first to receive a copy of the study when it is released in October 2014: 2014 Analytics Survey

Follow Radha on Twitter @rgowda

When Your Database Can’t Cut It, Your Business Suffers

By Larry Heathcote
Program Director, IBM Data Management

 

Your database is critical to your business. Applications depend on it. Business users depend on it. And when your database is not working well, your business suffers.

IBM DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

We’ve all heard the phrase “garbage in, garbage out,” and this is so true in today’s big data world. But it’s not just about good data; it’s also about the infrastructure that captures and delivers data to business applications and provides timely and actionable insights to those who need to understand, to make decisions, to act, to move the business forward.

 

It’s one thing to pull together a sandbox to examine new sources of data and write sophisticated algorithms that draw out useful insights. But it’s another matter to roll this out into production where Line of Business users depend on good data, reliable applications and insightful analytics. This is truly where the rubber meets the road – the production environment…and your database better be up to it.

Lenny Liebmann, InformationWeek Contributing Editor, and I recorded a webinar recently titled “Is Your Database Really Ready for Big Data?” And Lenny posted a blog talking about the role of DataOps in the modern data infrastructure. I’d like to extend this one more step and talk about the importance of your database in production. The best way I can do that is through some examples.

 

1: Speed of Deployment

ERP systems are vital to many companies for effective inventory management and efficient operations. It is important to make sure that these systems are well tuned, efficient and highly available, and when a change is needed that it be done quickly. Friedrich ran the SAP environment for a manufacturing company, and he was asked to improve the performance of applications that were used for inventory management and supply chain ops. More specifically, he needed to replace their production database with one that improved application performance but kept storage growth to a minimum. Knowing that time is money, his mission was to deploy the solution quickly, which he did… 3 hours up and running in a production environment with more than 80 percent data compression and 50x performance improvement. The business impact – inventory levels were optimized, operating costs were reduced and the supply chain became far more efficient.

 

2: Performance

Rajesh’s team needed to improve performance of an online sales portal that gave his company’s reps the ability to run sales and ERP reports from their tablets and mobile phones out in the field. Queries were taking 4-5 minutes to execute, and this simply was not acceptable – btw, impatience is a virtue for a sales rep. Rajesh found that the existing database was the bottleneck, so he replaced it. With less than 20 hours of work, it was up and running in production with a 96.5 percent reduction in query times. Can you guess the impact this had? Yep, sales volumes increased significantly, Rajesh’s team became heroes and the execs were happy. And, since reps were more productive, they were also more satisfied and rep turnover was reduced.

 

3: Reliability, Availability and Scalability

In today’s 24x7x365 world, transaction system downtime is just not an option. An insurance company was having issues with the performance, availability, reliability and scalability needed to support the company’s rapid growth of insurance applications. Replacing its database not only increased application availability from 80 to 95 percent, but also brought a dramatic improvement in data processing times even after a 4x growth in the number of concurrent jobs … and decreased total cost of ownership by 50 percent. The company also saw customer satisfaction and stickiness improve.

These significant results happened because these clients upgraded their core database to IBM DB2. DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

To learn more, watch our webinar.

Follow Larry on Twitter at @larryheathcote

 

Join Larry and Lenny on a Tweet Chat on June 26 at 11 AM ET. Join the conversation using #bigdatamgmt. For the questions and more details see: http://bit.ly/Jun26TweetChat

A Dollar Saved Is Two Dollars Earned. Over A Million Dollars Saved Is?

Radha Gowda, Product Marketing Manager, DB2 and related offerings

A refreshing new feeling: DB2 can offer your business a 57% improvement in compression, a 60% improvement in processing times, and a 30-60% reduction in transaction completion time.

Coca-Cola Bottling Co. Consolidated (CCBCC) was faced with severe business challenges: the rising cost of commodities and sharply higher fuel prices could not be allowed to impact consumers of its world-famous sodas.  At the time of an SAP software refresh, the CCBCC IT team reviewed the company’s database strategy and discovered that migrating to IBM DB2 offered significant cost savings.  DB2 has delivered total operating cost reductions of more than $1 million over four years. And DB2 10 has continued to be a compression workhorse, delivering another 20% improvement in compression rate.
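For the curious, the commands behind that kind of compression story look roughly like this (a sketch; the table name is hypothetical): DB2 10’s adaptive compression is enabled per table, and a classic reorg rebuilds the compression dictionary so existing rows are compressed too.

    db2 "ALTER TABLE sapr3.vbap COMPRESS YES ADAPTIVE"
    db2 "REORG TABLE sapr3.vbap RESETDICTIONARY"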

Staying competitive in a tough market

Andrew Juarez, Lead SAP Basis and DBA at CCBCC, notes: “We happen to be in a market where we are considered an expendable item. In other words, it is not something that is mandatory. So we cannot push the price off to our customers to offset any losses that we may have, which means that we need to be very competitive on how we price our product.”

Making the move to IBM DB2

Tom DeJuneas, IT Manager at CCBCC, states: “We did a cost projection, looking at the cost of Oracle licenses and maintenance fees, and calculated that we could produce around $750,000 worth of savings over five years by switching to IBM DB2. We also undertook a proof-of-concept phase, which showed that IBM DB2 was able to offer the same, and potentially more, functionality as an Oracle system.”

Moving from Oracle has brought about a significant change in the IT organization’s strategy, as Andrew Juarez explains: “When we were on Oracle, our philosophy was that we did not upgrade unless we were doing a major SAP upgrade. If the version was stable, then we stayed on it. Now, with IBM DB2 our strategy has completely changed, because with every new release our performance keeps getting better and better, and the value of the solution continues to grow.”

Fast, accurate data

IBM DB2 manages key data from SAP® ERP modules such as financials, warehouse management, materials management and customer data.  Tom DeJuneas states, “Many of our background jobs and online dialog response times have improved considerably. For example, on the first night after we performed the switchover, one of our plant managers reported that jobs that normally took 90 minutes to run were running in just 30 minutes. This was simply by changing the database. So we had a massive performance increase in supply chain batch runs right from the get-go.”

Impressive cost savings

IBM DB2 has helped CCBCC to make better use of its existing resources, delaying costly investment in new hardware and freeing up more money for investment in other projects.

“Originally, when we did our business case for moving to IBM DB2, it was built around the savings on our Oracle licenses and maintenance, and that was it,” notes Andrew Juarez. “We did not factor in disk savings, so the fact that we are seeing additional savings around storage is icing on the cake. We had originally projected about $750,000 savings over five years, and to date we are at four years and have seen just over a million dollars in savings after migrating to IBM DB2. So we have bettered our original estimate by more than 25 percent.”

Tom DeJuneas concludes, “At CCBCC it is very important for us to stay on the frontline of innovation, and technology like IBM DB2 helps us to do that. Based on our experience, I do not see why anyone running SAP would use anything other than IBM DB2 as its database engine.”

Download “CCBCC migrates to IBM DB2, saves more than $1 million” for complete details.

For new insights to take your business to the next level and of course, cost savings, we invite you to try the DB2 with BLU Acceleration side of life.

Cost/Benefit Comparison of DB2 10.5 and Oracle Offerings

Danny Arnold, Worldwide Competitive Enablement Team

As part of the IBM Information Management team, I’m often asked to describe the advantages of DB2 10.5 over the Oracle offerings (Oracle Database 12c, Oracle Exadata, and other products). There are many reasons to choose DB2 10.5 over Oracle Database from a business value standpoint, including licensing costs, technology advantages in the areas of compression, continuous availability with DB2 pureScale, and of course, the latest innovation, BLU Acceleration.  BLU Acceleration combines columnar processing, memory optimization, and other technologies to deliver fast analytic query results and excellent compression along with greatly reduced administration. However, from a client perspective, it is still someone from IBM stating that DB2 10.5 delivers all of these things (and we are probably a little biased).

Therefore, it was nice to read the recently published report from International Technology Group (ITG), Cost/Benefit Case for IBM DB2 10.5 for High Performance Analytics and Transaction Processing Compared to Oracle Platforms.

This report describes both the high performance analytics and transactional processing application areas and highlights the advantages of DB2 10.5 over the Oracle offerings. There are a number of key pieces of information that this report brings to light, including:

  • DB2 10.5 provides a lower total operating cost of ownership than Oracle
    • 28% to 34% lower 3-year TCO for transactional processing
    • 54% to 63% lower 3-year TCO for high performance analytics
  • Faster deployment, with an average deployment time of 57 days LESS than an Oracle-based solution
  • 5.3X better compression rates for DB2 over Oracle (12.6X for DB2 versus 7.3X for Oracle)

The ITG report provides details in the areas of technology differentiators for high performance analytics where Oracle Database does not use Massively Parallel Processing (MPP) or an in-memory RAM based approach like DB2 with BLU Acceleration or SAP HANA, but provides another level of processing within the Exadata Storage Servers. So there is no built-in capability within the Oracle Database to efficiently process analytics in a high performance manner. Instead, the client must purchase an Oracle Exadata engineered system to gain any performance advantages for an analytics workload.  ITG continues in their analysis of the Oracle Exadata system for analytics by stating:

The hybrid design has two important implications:

  1. The overall environment is complex – administrators, for example, must deal with partitioned Oracle databases, RAC, and Exadata-specific hardware and software features.
  2. Use of system resources is inefficient; high levels of system overhead are generated.

Exadata may be characterized as a “brute force” design. Because systems must compensate for overhead, the considerable processing power offered by this platform does not translate directly into application-level performance.

The ITG report wraps up its discussion of technology differentiators describing how DB2 with BLU Acceleration takes advantage of the multiple processor cores available in today’s Intel and Power environments along with the other BLU technology advantages to deliver an average of 31.6X better performance versus the smaller average performance gain of 5.5X experienced by Oracle Exadata clients.

During the ITG discussion of complexity between DB2 and Oracle within the report, the findings were even more favorable to DB2. Oracle Exadata administrators have to develop skills in system and storage to augment their Oracle Database DBA skills. The average Oracle Exadata administrative task breakdown is 60% database administration and 40% system and storage administration. In ITG discussions with clients, Oracle Exadata systems required an average of 0.8 FTE (full time equivalents) per Exadata system for administration versus the DB2 with BLU Acceleration average of 0.25 FTE. This complexity difference between the two environments was highlighted in the deployment time comparison, with an average deployment time of 38 days for DB2 with BLU Acceleration versus an average deployment time of 95 days for Oracle Exadata. An interesting side note to this deployment time difference is that most Oracle Exadata deployments (over 60%) were performed by clients that were already Oracle Database owners and experienced with the Oracle Database.

The ITG report covers packaging and pricing and provides many tables and graphs highlighting the differences between DB2 10.5 and Oracle. If you are interested in learning more details about DB2 10.5 and its cost benefits over Oracle, I urge you to download and read the ITG report.


Whatever you do, stay uncompromised!

Radha Gowda, Product Marketing Manager, DB2 and related offerings

That is the new marketing campaign for the all-new Audi A3 – luxury without compromise at an affordable price. How does Audi AG accomplish that? By creating super-efficient business processes, for a start.

Audi AG, facing growing competition from global auto companies, teamed up with IBM to implement server virtualization using IBM PowerVM® and migrate over 100 SAP systems from HP-UX with Oracle Database 10g to IBM AIX 6.1 with IBM DB2 9.7. With the IBM private cloud ready infrastructure underpinning its SAP systems, Audi now has a robust, flexible, and high-performance platform for managing its business operations. As the company tackles increasing competition, the ability to expand and contract its SAP solutions in line with changing demands will help Audi to ensure that it has the right IT resources in place, at the right cost of ownership.

Need for a more sustainable infrastructure

With over eight production plants across the world and increasing competition, the IT Services Department at Audi needed to help the company address multiple business challenges – increasing demands from employees, customers and suppliers, variable sales volumes, rising cost pressures, and the need for new technologies. They envisioned a virtualized infrastructure that is more flexible and sustainable; equally important was the availability and performance of over 100 business-critical SAP systems.

“IBM created an environment that enabled us to compare DB2 9.7 with a reorganized Oracle 10g database, both running on IBM Power Systems servers. The results were clear: storage savings in the range of 50 to 70 percent, and much higher performance when using DB2.”
– Markus Wierl, Service Owner of SAP Infrastructure, AUDI AG

Cloud for flexibility and speed

Teaming with IBM, Audi implemented server virtualization using IBM PowerVM® and migrated over 100 SAP systems, including more than 30 SAP landscapes and 26 high-availability clusters, from HP-UX with Oracle 10g database to IBM AIX 6.1 with IBM DB2 9.7. The new SAP landscape runs on dual-data-center, symmetrically implemented ‘private cloud ready’ infrastructure, with the infrastructure hosted by Audi and managed by IBM. Virtualization enabled Audi to pack a large number of separate business systems onto a small number of physical servers, pushing up utilization and eliminating costly unused capacity. Rather than having each logical system tied to a particular physical server, and only able to expand through the physical addition of new hardware, Audi can now reallocate resources on the fly from one system to another as required, and respond faster and more cost-effectively to new business requirements.

“This really was an impressive accomplishment by the IBM team: migrating more than 100 SAP systems to a completely new operating system and database in six months and with no disruption.” — Markus Wierl

Supporting business excellence

“We trust the IBM infrastructure to run our production systems, which are absolutely business-critical,” says Markus Wierl. “Any significant unplanned downtime could lead to a stoppage on our production lines. Modern automotive manufacturing is based on just-in-time concepts, and involves a large and complex partner ecosystem. So any minor disruption to production can rapidly turn into a major problem for multiple parties. For this reason, we highly value the robustness and availability of the IBM Power Systems and BladeCenter technology for our SAP solutions.”

Download “Audi gears up for continued success with IBM private cloud” for complete details.

Migration is a factor in natural selection. Stay uncompromised as you evolve your data management systems.

Follow Radha on Twitter @rgowda

Simplifying Oracle Database Migrations

Danny Arnold, Worldwide Competitive Enablement Team

As part of my role in IBM Information Management as a technical advocate for our DB2 for LUW (Linux, UNIX, Windows) product set, I often enter into discussions with clients who are currently using Oracle Database.

With the unique technologies delivered in the DB2 10 releases (10.1 and 10.5), such as

  • temporal tables to allow queries against data at a specific point in time,
  • row and column access control (RCAC) to provide granular row- and column-level security that extends the traditional RDBMS table privileges for additional data security,
  • pureScale for near-continuous-availability database clusters,
  • database partitioning feature (DPF) for parallel query processing against large data sets (100s of TBs), and
  • the revolutionary new BLU Acceleration technology to allow analytic workloads to use column-organized tables to deliver performance orders of magnitude faster than conventional row-organized tables,

many clients like the capabilities and technology that DB2 for LUW provides (the first of these features is sketched below).
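As a quick taste of temporal tables (a minimal sketch; all names are hypothetical), a system-period temporal table keeps history automatically and lets you query data as of a past point in time:

    -- DB2 maintains the SYSTEM_TIME period columns and the history table
    CREATE TABLE policy (
      id        INT NOT NULL PRIMARY KEY,
      coverage  INT,
      sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
      sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
      trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
      PERIOD SYSTEM_TIME (sys_start, sys_end)
    );
    CREATE TABLE policy_history LIKE policy;
    ALTER TABLE policy ADD VERSIONING USE HISTORY TABLE policy_history;

    -- Time travel: the table as it looked on January 1, 2014
    SELECT * FROM policy FOR SYSTEM_TIME AS OF '2014-01-01';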

However, a key concern is the level of effort required to migrate an existing Oracle Database environment to DB2. Although DB2 provides Oracle compatibility, and has had this capability built into the database engine since the DB2 9.7 release, there is still confusion on the part of clients as to what this compatibility means in terms of a migration effort. Today, DB2 provides a native Oracle PL/SQL procedural language compiler, support for Oracle-specific ANSI SQL language extensions, Oracle SQL functions, and Oracle-specific data types (such as NUMBER and VARCHAR2). This compatibility layer within DB2 allows many Oracle Database environments to be migrated to DB2 with minimal effort. Many stored procedures and much of the application SQL used against Oracle Database can run unchanged against DB2, reducing both the migration effort and the migration risk; because the application does not have to be modified, the testing phase requires far less effort than it would for heavily modified application SQL and stored procedures. Although the migration effort is relatively straightforward, questions still come up with clients, and there is a need for a clear explanation of the Oracle Database to DB2 migration process.
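As a minimal sketch of how the compatibility layer is switched on (the database name is hypothetical; the vector must be set before the database is created, and Oracle-compatible databases must be Unicode):

    db2set DB2_COMPATIBILITY_VECTOR=ORA
    db2stop
    db2start
    db2 "CREATE DATABASE oradb USING CODESET UTF-8 TERRITORY US"

Once enabled, PL/SQL objects using Oracle data types such as NUMBER and VARCHAR2 compile natively — for example, a script run with db2 -td@ -f procs.sql containing:

    CREATE OR REPLACE PROCEDURE raise_salary(p_id IN NUMBER, p_pct IN NUMBER) IS
    BEGIN
      UPDATE emp SET salary = salary * (1 + p_pct / 100) WHERE emp_id = p_id;
    END;
    @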

Recently, a new solution brief entitled “Simplify your Oracle database migrations,” published by IBM Data Management, provides a clear explanation of how DB2 and the PureData for Transactions appliance, built upon DB2 pureScale, can deliver a clustered database environment for migrating an Oracle database to DB2.  The brief gives a clear, concise overview of what an Oracle to DB2 migration requires and of the assistance and tooling available from IBM to make a migration straightforward for a client’s environment, including a description of the IBM Database Conversion Workbench, which helps a client move their tables, stored procedures, and data from Oracle to DB2.

The fact that DB2 for LUW makes migrating from Oracle a task that takes minimal effort, thanks to the Oracle compatibility built into DB2, is complemented by the PureData for Transactions system. PureData for Transactions provides an integrated, pre-built DB2 pureScale environment that allows a pureScale instance and a DB2 clustered database to be ready for use in a matter of hours, simplifying the implementation and configuration experience for the client. Combining the ease of Oracle migration to DB2 with the rapid implementation and configuration possible with PureData for Transactions provides a winning combination for a client looking for a more cost-effective and available alternative to Oracle Database.
