When Your Database Can’t Cut It, Your Business Suffers


Larry Heathcote, Program Director, IBM Data Management

Your database is critical to your business. Applications depend on it. Business users depend on it. And when your database is not working well, your business suffers.

IBM DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

We’ve all heard the phrase “garbage in, garbage out,” and this is so true in today’s big data world. But it’s not just about good data; it’s also about the infrastructure that captures and delivers data to business applications and provides timely and actionable insights to those who need to understand, to make decisions, to act, to move the business forward.

 

It’s one thing to pull together a sandbox to examine new sources of data and write sophisticated algorithms that draw out useful insights. But it’s another matter to roll this out into production, where Line of Business users depend on good data, reliable applications and insightful analytics. This is truly where the rubber meets the road – the production environment…and your database had better be up to it.

Lenny Liebmann, InformationWeek Contributing Editor, and I recorded a webinar recently titled “Is Your Database Really Ready for Big Data?” And Lenny posted a blog talking about the role of DataOps in the modern data infrastructure. I’d like to extend this one more step and talk about the importance of your database in production. The best way I can do that is through some examples.

 

1: Speed of Deployment

ERP systems are vital to many companies for effective inventory management and efficient operations. It is important to make sure that these systems are well tuned, efficient and highly available, and that when a change is needed it can be made quickly. Friedrich ran the SAP environment for a manufacturing company, and he was asked to improve the performance of applications that were used for inventory management and supply chain operations. More specifically, he needed to replace their production database with one that improved application performance but kept storage growth to a minimum. Knowing that time is money, his mission was to deploy the solution quickly, which he did: up and running in a production environment in 3 hours, with more than 80 percent data compression and a 50x performance improvement. The business impact – inventory levels were optimized, operating costs were reduced and the supply chain became far more efficient.
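
Friedrich’s compression figure is in line with what DB2 adaptive compression typically delivers. As a hedged illustration only – the table name is hypothetical, not Friedrich’s actual setup – enabling it on an existing table takes just a few statements:

    -- Turn on adaptive (row plus page level) compression for an existing table
    ALTER TABLE inventory.stock_moves COMPRESS YES ADAPTIVE;

    -- Rebuild the table so existing rows are compressed
    CALL SYSPROC.ADMIN_CMD('REORG TABLE inventory.stock_moves');

    -- After RUNSTATS, the catalog reports the percentage of pages saved
    CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE inventory.stock_moves');
    SELECT tabname, pctpagessaved
    FROM   syscat.tables
    WHERE  tabschema = 'INVENTORY';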

 

2: Performance

Rajesh’s team needed to improve performance of an online sales portal that gave his company’s reps the ability to run sales and ERP reports from their tablets and mobile phones out in the field. Queries were taking 4-5 minutes to execute, and this simply was not acceptable – by the way, impatience is a virtue for a sales rep. Rajesh found that the existing database was the bottleneck, so he replaced it. With less than 20 hours of work, it was up and running in production with a 96.5 percent reduction in query times. Can you guess the impact this had? Yep, sales volumes increased significantly, Rajesh’s team became heroes and the execs were happy. And, since reps were more productive, they were also more satisfied, and rep turnover was reduced.

 

3: Reliability, Availability and Scalability

In today’s 24x7x365 world, transaction system downtime is just not an option. An insurance company was struggling with the performance, availability, reliability and scalability needed to support its rapidly growing portfolio of insurance applications. Replacing their database not only increased application availability from 80 to 95 percent, but also delivered a dramatic improvement in data processing times even after a 4x growth in the number of concurrent jobs… and decreased their total cost of ownership by 50 percent. The company also saw customer satisfaction and stickiness improve.

These significant results happened because these clients upgraded their core database to IBM DB2. DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

To learn more, watch our webinar.

Follow Larry on Twitter at @larryheathcote

 

Join Larry and Lenny for a Tweet Chat on June 26 at 11 ET. Join the conversation using #bigdatamgmt. For the questions and more details see: http://bit.ly/Jun26TweetChat

A Dollar Saved Is Two Dollars Earned. Over A Million Dollars Saved Is?


Radha Gowda, Product Marketing Manager, DB2 and related offerings

A refreshing new feeling, because DB2 can offer your business a 57% improvement in compression, a 60% improvement in processing times, and a 30-60% reduction in transaction completion time.

Coca-Cola Bottling Co. Consolidated (CCBCC) was faced with severe business challenges: the rising cost of commodities and sharply higher fuel prices could not be allowed to impact consumers of its world-famous sodas. At the time of an SAP software refresh, the CCBCC IT team reviewed the company’s database strategy and discovered that migrating to IBM DB2 offered significant cost savings. DB2 has delivered total operating cost reductions of more than $1 million over four years. And DB2 10 has continued to be a compression workhorse, delivering another 20% improvement in compression rate.

Staying competitive in a tough market

Andrew Juarez, Lead SAP Basis and DBA at CCBCC, notes: “We happen to be in a market where we are considered an expendable item. In other words, it is not something that is mandatory. So we cannot push the price off to our customers to offset any losses that we may have, which means that we need to be very competitive on how we price our product.”

Making the move to IBM DB2

Tom DeJuneas, IT Manager at CCBCC, states: “We did a cost projection, looking at the cost of Oracle licenses and maintenance fees, and calculated that we could produce around $750,000 worth of savings over five years by switching to IBM DB2. We also undertook a proof-of-concept phase, which showed that IBM DB2 was able to offer the same, and potentially more, functionality as an Oracle system.”

Moving from Oracle has brought about a significant change in the IT organization’s strategy, as Andrew Juarez explains: “When we were on Oracle, our philosophy was that we did not upgrade unless we were doing a major SAP upgrade. If the version was stable, then we stayed on it. Now, with IBM DB2 our strategy has completely changed, because with every new release our performance keeps getting better and better, and the value of the solution continues to grow.”

Fast, accurate data

IBM DB2 manages key data from SAP® ERP modules such as financials, warehouse management, materials management and customer data. Tom DeJuneas states, “Many of our background jobs and online dialog response times have improved considerably. For example, on the first night after we performed the switchover, one of our plant managers reported that jobs that normally took 90 minutes to run were running in just 30 minutes. This was simply by changing the database. So we had a massive performance increase in supply chain batch runs right from the get-go.”

Impressive cost savings

IBM DB2 has helped CCBCC to make better use of its existing resources, delaying costly investment in new hardware and freeing up more money for investment in other projects.

“Originally, when we did our business case for moving to IBM DB2, it was built around the savings on our Oracle licenses and maintenance, and that was it,” notes Andrew Juarez. “We did not factor in disk savings, so the fact that we are seeing additional savings around storage is icing on the cake. We had originally projected about $750,000 in savings over five years, and to date we are at four years and have seen just over a million dollars in savings after migrating to IBM DB2. So we have bettered our original estimate by more than 25 percent.”

Tom DeJuneas concludes, “At CCBCC it is very important for us to stay on the frontline of innovation, and technology like IBM DB2 helps us to do that. Based on our experience, I do not see why anyone running SAP would use anything other than IBM DB2 as its database engine.”

Download “CCBCC migrates to IBM DB2, saves more than $1 million” for complete details.

For new insights to take your business to the next level and of course, cost savings, we invite you to try the DB2 with BLU Acceleration side of life.

Cost/Benefit Comparison of DB2 10.5 and Oracle Offerings


Danny Arnold, Worldwide Competitive Enablement Team

As part of the IBM Information Management team, I’m often asked to describe the advantages of DB2 10.5 over the Oracle offerings (Oracle Database 12c, Oracle Exadata, and other products). There are many reasons to choose DB2 10.5 over Oracle Database from a business value standpoint, including licensing costs, technology advantages in areas such as compression, continuous availability with DB2 pureScale, and of course the latest innovation, BLU Acceleration. BLU Acceleration combines columnar, memory optimization, and other technologies to deliver fast analytic query results and excellent compression along with greatly reduced administration. However, from a client’s perspective, it is still someone from IBM stating that DB2 10.5 delivers all of these things (and we are probably a little biased).

Therefore, it was nice to read the recently published report from International Technology Group (ITG), Cost/Benefit Case for IBM DB2 10.5 for High Performance Analytics and Transaction Processing Compared to Oracle Platforms.

This report describes both the high performance analytics and transactional processing application areas and highlights the advantages of DB2 10.5 over the Oracle offerings. The report brings a number of key pieces of information to light, including:

  • DB2 10.5 provides a lower total cost of ownership than Oracle
    • 28% to 34% lower 3-year TCO for transactional processing
    • 54% to 63% lower 3-year TCO for high performance analytics
  • Faster deployment, with an average deployment time 57 days less than an Oracle-based solution
  • Considerably better compression rates for DB2 than Oracle (an average of 12.6X for DB2 versus 7.3X for Oracle)

The ITG report provides details on the technology differentiators for high performance analytics: Oracle Database does not use Massively Parallel Processing (MPP) or an in-memory, RAM-based approach like DB2 with BLU Acceleration or SAP HANA, but instead provides another level of processing within the Exadata Storage Servers. So there is no built-in capability within the Oracle Database to efficiently process analytics in a high performance manner; instead, the client must purchase an Oracle Exadata engineered system to gain any performance advantage for an analytics workload. ITG continues its analysis of the Oracle Exadata system for analytics by stating:

The hybrid design has two important implications:

  1. The overall environment is complex – administrators, for example, must deal with partitioned Oracle databases, RAC and Exadata-specific hardware and software features.
  2. Use of system resources is inefficient. High levels of system overhead are generated.

Exadata may be characterized as a “brute force” design. Because systems must compensate for overhead, the considerable processing power offered by this platform does not translate directly into application-level performance.

The ITG report wraps up its discussion of technology differentiators by describing how DB2 with BLU Acceleration takes advantage of the multiple processor cores available in today’s Intel and Power environments, along with the other BLU technology advantages, to deliver an average of 31.6X better performance versus the smaller average performance gain of 5.5X experienced by Oracle Exadata clients.

In the ITG report’s discussion of complexity, the findings were even more favorable to DB2. Oracle Exadata administrators have to develop system and storage administration skills to augment their Oracle Database DBA skills; the average Oracle Exadata administrative workload breaks down as 60% database administration and 40% system and storage administration. In ITG discussions with clients, Oracle Exadata systems required an average of 0.8 FTE (full-time equivalents) per Exadata system for administration versus the DB2 with BLU Acceleration average of 0.25 FTE. This complexity difference between the two environments was highlighted in the deployment time comparison, with an average deployment time of 38 days for DB2 with BLU Acceleration versus 95 days for Oracle Exadata. An interesting side note to this deployment time difference is that most Oracle Exadata deployments (over 60%) were performed by clients that already owned, and were experienced with, the Oracle Database.

The ITG report covers packaging and pricing and provides many tables and graphs highlighting the differences between DB2 10.5 and Oracle. If you are interested in learning more details about DB2 10.5 and its cost benefits over Oracle, I urge you to download and read the ITG report.


Whatever you do, stay uncompromised!


Radha Gowda, Product Marketing Manager, DB2 and related offerings

That is the new marketing campaign for the all-new Audi A3 – luxury without compromise at an affordable price. How does Audi AG accomplish that? By creating super-efficient business processes, for a start.

Audi AG, facing growing competition from global auto companies, teamed up with IBM to implement server virtualization using IBM PowerVM® and migrate over 100 SAP systems from HP-UX with Oracle 10g database to IBM AIX 6.1 with IBM DB2 9.7. With the IBM private-cloud-ready infrastructure underpinning its SAP systems, Audi now has a robust, flexible, and high-performance platform for managing its business operations. As the company tackles increasing competition, the ability to expand and contract its SAP solutions in line with changing demands will help Audi ensure that it has the right IT resources in place, at the right cost of ownership.

Need for a more sustainable infrastructure

With over eight production plants across the world and increasing competition, the IT Services Department at Audi needed to help the company address multiple business challenges – increasing demands from employees, customers and suppliers, variable sales volumes, rising cost pressures, and the need for new technologies. They envisioned a virtualized infrastructure that would be more flexible and sustainable; equally important was the availability and performance of over 100 business-critical SAP systems.

“IBM created an environment that enabled us to compare DB2 9.7 with a reorganized Oracle 10g database, both running on IBM Power Systems servers. The results were clear: storage savings in the range of 50 to 70 percent, and much higher performance when using DB2.”
– Markus Wierl, Service Owner of SAP Infrastructure, AUDI AG

Cloud for flexibility and speed

Teaming with IBM, Audi implemented server virtualization using IBM PowerVM® and migrated over 100 SAP systems, including more than 30 SAP landscapes and 26 high-availability clusters, from HP-UX with Oracle 10g database to IBM AIX 6.1 with IBM DB2 9.7. The new SAP landscape runs on dual-data-center, symmetrically implemented ‘private cloud ready’ infrastructure, with the infrastructure hosted by Audi and managed by IBM. Virtualization enabled Audi to pack a large number of separate business systems onto a small number of physical servers, pushing up utilization and eliminating costly unused capacity. Rather than having each logical system tied to a particular physical server, and only able to expand through the physical addition of new hardware, Audi can now reallocate resources on the fly from one system to another as required, and respond faster and more cost-effectively to new business requirements.

“This really was an impressive accomplishment by the IBM team: migrating more than 100 SAP systems to a completely new operating system and database in six months and with no disruption.” — Markus Wierl

Supporting business excellence

“We trust the IBM infrastructure to run our production systems, which are absolutely business-critical,” says Markus Wierl. “Any significant unplanned downtime could lead to a stoppage on our production lines. Modern automotive manufacturing is based on just-in-time concepts, and involves a large and complex partner ecosystem. So any minor disruption to production can rapidly turn into a major problem for multiple parties. For this reason, we highly value the robustness and availability of the IBM Power Systems and BladeCenter technology for our SAP solutions.”

Download “Audi gears up for continued success with IBM private cloud” for complete details.

Migration is a factor in natural selection. Stay uncompromised as you evolve your data management systems.

Follow Radha on Twitter @rgowda

Simplifying Oracle Database Migrations


Danny Arnold, Worldwide Competitive Enablement Team

As part of my role in IBM Information Management, as a technical advocate for our DB2 for LUW (Linux, UNIX, Windows) product set, I often enter into discussions with clients that are currently using Oracle Database.

With the unique technologies delivered in the DB2 10 releases (10.1 and 10.5), such as

  • temporal tables to allow queries against data at a specific point in time,
  • row and column access control (RCAC) to provide granular row- and column-level security that extends the traditional RDBMS table privileges for additional data security (see the sketch after this list),
  • pureScale for near-continuous-availability database clusters,
  • database partitioning feature (DPF) for parallel query processing against large data sets (100s of TBs), and
  • the revolutionary new BLU Acceleration technology to allow analytic workloads to use column-organized tables to deliver performance orders of magnitude faster than conventional row-organized tables,

many clients like the capabilities and technology that DB2 for LUW provides.
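
To make the RCAC item above concrete, here is a minimal sketch – the table, group, and column names are hypothetical, not from any client scenario – of a row permission and a column mask in DB2 10:

    -- Restrict rows: tellers see only customers of their own branch
    ALTER TABLE bank.customer ACTIVATE ROW ACCESS CONTROL;

    CREATE PERMISSION teller_rows ON bank.customer
      FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1
                     AND branch = 'DOWNTOWN'
      ENFORCED FOR ALL ACCESS
      ENABLE;

    -- Mask a column: only auditors see full account numbers
    ALTER TABLE bank.customer ACTIVATE COLUMN ACCESS CONTROL;

    CREATE MASK acct_mask ON bank.customer
      FOR COLUMN account_no
      RETURN CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'AUDITOR') = 1
                  THEN account_no
                  ELSE 'XXXX-XXXX-' || SUBSTR(account_no, 10, 4)
             END
      ENABLE;

Because these rules live in the database engine, they apply to every application and user, including ad hoc SQL, with no application changes required.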

However, a key concern is the level of effort needed to migrate an existing Oracle Database environment to DB2. Although DB2 provides Oracle compatibility, and has had this capability built into the database engine since the DB2 9.7 release, there is still confusion on the part of clients as to what this Oracle compatibility means in terms of a migration effort. Today, DB2 provides a native Oracle PL/SQL procedural language compiler, support for Oracle-specific SQL language extensions, Oracle SQL functions, and Oracle-specific data types (such as NUMBER and VARCHAR2). This compatibility layer within DB2 allows many Oracle Database environments to be migrated to DB2 with minimal effort. Much of the stored procedure logic and application SQL used against Oracle Database can run unchanged against DB2, reducing both migration effort and migration risk because the application does not have to be modified; the testing phase then requires far less effort than it would for changed or heavily modified application SQL and stored procedures. Although the migration effort is relatively straightforward, questions still come up with clients, and there is a need for a clear explanation of the Oracle Database to DB2 migration process.
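
To show what that compatibility layer means in practice, here is a minimal sketch – object names are hypothetical, and this is an illustration rather than a migration recipe – of the Oracle-flavored DDL and PL/SQL that DB2 accepts once the compatibility vector is enabled:

    -- Enable Oracle compatibility before creating the database
    -- (instance restart required; affects databases created afterwards)
    db2set DB2_COMPATIBILITY_VECTOR=ORA
    db2stop
    db2start
    db2 "CREATE DATABASE MYDB"

    -- Oracle data types work as-is
    CREATE TABLE orders (
      order_id NUMBER(10) NOT NULL PRIMARY KEY,
      customer VARCHAR2(100),
      amount   NUMBER(12,2)
    );

    -- So does PL/SQL, compiled natively by the DB2 engine
    CREATE OR REPLACE PROCEDURE add_order(p_id IN NUMBER, p_cust IN VARCHAR2)
    IS
    BEGIN
      INSERT INTO orders (order_id, customer, amount) VALUES (p_id, p_cust, 0);
    END;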

Recently, a new solution brief entitled “Simplify your Oracle database migrations,” published by IBM Data Management, provides a clear explanation of how DB2 and the PureData for Transactions appliance, built upon DB2 pureScale, can deliver a clustered database environment for migrating an Oracle database to DB2. The brief gives a concise overview of what an Oracle to DB2 migration requires and of the assistance and tooling available from IBM to make a migration straightforward for a client’s environment, including a description of IBM Database Conversion Workbench, the tool available to assist a client in moving their tables, stored procedures, and data from Oracle to DB2.

The minimal effort required to migrate from Oracle, thanks to the Oracle compatibility built into DB2 for LUW, is complemented by the PureData for Transactions system. PureData for Transactions provides an integrated, pre-built DB2 pureScale environment that allows a pureScale instance and a DB2 clustered database to be ready for use in a matter of hours, simplifying the implementation and configuration experience for the client. Combining the ease of Oracle migration to DB2 with the rapid implementation and configuration possible with PureData for Transactions is a winning combination for a client looking for a more cost-effective and available alternative to Oracle Database.

Fraud detection? Not so elementary, my dear! (Part 2)


Radha Gowda, Product Marketing Manager, DB2 and related offerings

The first part of this blog gave an overview of the IBM Watson Foundation portfolio and DB2 solutions for financial fraud detection. In this part, we’ll go over the DB2 Warehouse features that help detect fraud in near-real-time.

Figure 1: DB2 warehouse for operational analytics


Data warehouses integrate data from one or more disparate sources to provide a single view of the business and have that single repository available to all levels of the business for analysis. To support today’s workloads, the data warehouse architecture must optimize both traditional deep analytic queries and shorter transactional type queries. It must be able to scale out under data explosion without compromising on either performance or storage. And, it must have the capacity to load and update data in real-time.  DB2 for Linux, UNIX and Windows offers you all these capabilities and more to help you build a scalable and high performing warehouse for near real-time fraud detection.

DB2 warehouse components are organized into six major categories as shown in Figure 2.  We shall discuss only the highlighted ones that help make near-real-time fraud detection a reality.

Figure 2: Warehouse components available in DB2 advanced editions


As we discussed before, fraud detection is knowledge intensive. It involves sifting through vast amounts of data to identify and verify patterns, and constructing fraud models to help with real-time detection of fraudulent activity.

Embedded Analytics
DB2 offers embedded analytics in the form of OLAP and data mining.

Data Mining enables you to analyze patterns and make predictions. Unlike solutions that require end users to extract data from the warehouse, independently analyze it and then send the results back to the warehouse, DB2 provides embedded data mining, modeling, and scoring capabilities.

Modeling – the process starts with historical data being gathered and put through a series of mathematical functions to classify, cluster and segment the data. It automatically finds associations and business rules in the data, which may signify interesting patterns (imagine customers’ credit card purchasing patterns). The business rules are then collected together into a model, which can have a few or tens of thousands of rules.

Visualization helps analysts evaluate the business rules to make sure that they are accurate.

Scoring involves applying the verified business rules to current data to help predict transactions that are likely to be fraudulent in real time.

For example, consider credit card spending patterns outside the norm. While outlier rules (detecting deviations in large data sets) can be applied to a banking transaction when it enters the system to help predict whether it is fraudulent, outlier handling is not usually automatic. An expert needs to take a closer look to decide whether or not to take action. This is where Cognos helps – generating reports that visualize the outliers so a human expert can understand the nature of an outlier.
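
Production scoring uses the mining models themselves, but the intuition behind an outlier rule can be sketched in plain SQL. A hedged illustration – the table, columns, and three-sigma threshold are hypothetical, and real models weigh far more variables:

    -- Flag transactions that deviate sharply from each card's history
    SELECT t.card_id, t.txn_id, t.amount
    FROM   transactions t
    JOIN  (SELECT card_id,
                  AVG(amount)    AS avg_amt,
                  STDDEV(amount) AS sd_amt
           FROM   transactions
           GROUP BY card_id) s
      ON  t.card_id = s.card_id
    WHERE t.amount > s.avg_amt + 3 * s.sd_amt;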

DB2 supports standard data mining model algorithms such as clustering, associations, classification and prediction; additional algorithms may be imported in industry-standard Predictive Model Markup Language (PMML) format from other PMML-compliant data mining applications including SAS and SPSS. This capability enables high-volume, high-speed, parallelized scoring of data in DB2 using third-party models.

Cubing Services provides decision makers with a multidimensional view of data stored in a relational database. It supports OLAP capabilities within the data warehouse and simplifies queries that run against large and complex data stores. The multidimensional view of data leads to easier discovery and understanding of the relationships in your data for better business decisions. In addition, Cubing Services cubes are first-class data providers to the Cognos Business Intelligence platform for incorporating predictive and analytic insights into Cognos reports.
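
Cubing Services itself is driven through its own cube definitions and Cognos, but the flavor of the rollups it serves can be approximated in ordinary DB2 SQL with CUBE grouping sets – a sketch with hypothetical names:

    -- Totals for every combination of region and product, plus subtotals
    -- and a grand total, computed in a single pass over the data
    SELECT region, product, SUM(amount) AS total_sales
    FROM   sales
    GROUP BY CUBE (region, product);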

Unstructured Data – up to 80 percent of the data within an organization is unstructured. DB2 can extract information from your unstructured business text and correlate it with your structured data to increase business insight into customer issues. DB2 also allows you to process unstructured data and create multidimensional reports using OLAP capabilities.  In addition, unstructured data can be integrated into data mining models to broaden predictive capabilities.

DB2 Spatial Extender allows you to store, manage, and analyze spatial data in DB2, which, alongside the business data in your warehouse, helps with fraud analysis.

Temporal Data helps you implement time-based queries quickly and easily. Historical trend analysis and point-in-time queries can be constructed by using the history tables and SQL period specifications that are part of the database engine.
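
A minimal system-time example with hypothetical names: once versioning is enabled, DB2 moves old row versions to the history table automatically, and a point-in-time query becomes a single clause.

    CREATE TABLE policy (
      policy_id INT NOT NULL PRIMARY KEY,
      premium   DECIMAL(10,2),
      sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
      sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
      trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
      PERIOD SYSTEM_TIME (sys_start, sys_end)
    );

    CREATE TABLE policy_history LIKE policy;
    ALTER TABLE policy ADD VERSIONING USE HISTORY TABLE policy_history;

    -- What did this policy look like on January 1st?
    SELECT * FROM policy
    FOR SYSTEM_TIME AS OF '2014-01-01-00.00.00'
    WHERE policy_id = 1001;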

Performance Optimization

Database Partitioning Feature (DPF) for row-based data stores – As data volume increases over time, the data might become skewed and fragmented, resulting in decreased performance. DPF distributes table data across multiple database partitions in a shared-nothing manner in which each database partition “owns” a subset of the data. It enables massive parallel processing by transparently splitting the database across multiple partitions and using the power of multiple servers to satisfy requests for large amounts of information. This architecture allows databases to grow very large to support true enterprise data warehouses.
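
In SQL terms, distributing a table under DPF is a single clause – a sketch with hypothetical names; the database partitions themselves come from your instance and partition group configuration:

    -- Spread a large fact table across all partitions by hashing the key;
    -- joins and aggregations then run in parallel on every partition
    CREATE TABLE warehouse.txn_fact (
      account_id BIGINT NOT NULL,
      txn_date   DATE,
      amount     DECIMAL(12,2)
    )
    DISTRIBUTE BY HASH (account_id);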

Data Movement and Transformation

Continuous Data Ingest (CDI) allows business-critical data to be continually loaded into the warehouse without the latency associated with periodic batch loading. It allows the warehouse to reflect the most up-to-date information and can help you make timely and accurate decisions. Consider, for example, receiving a lost credit card report – a potential credit card fraud alert – from the call center. Such an event is ingested into the warehouse immediately rather than waiting for a batch load that occurs at pre-defined intervals. Using such contextual information along with account transaction data can help in real-time fraud detection.

In fact, after experiencing just how beneficial the CDI feature is, some of our clients have renamed their Extract, Transform, and Load (ETL) processes to Extract, Transform, and Ingest (ETI).
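
The simplest form of the INGEST command shows why continuous feeds are easy to set up – the file and table names here are hypothetical, and the delimited columns must line up with the target table:

    -- Continuously feed call-center events into the warehouse;
    -- INGEST commits as it goes, so the table stays online and queryable
    INGEST FROM FILE lost_card_events.csv
      FORMAT DELIMITED
      INSERT INTO warehouse.card_events;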

All these features are available in DB2 advanced editions and IBM PureData System for Operational Analytics to help you deliver near-real-time insights.

Now, are you meeting the service level agreements for performance while trying to prevent fraud in real time? Not sure? Why don’t you give DB2 with BLU Acceleration or other IBM Data Management solutions a try?  Perhaps IBM Data Management Solutions can help you achieve your business objectives.

Yes, fraud detection is not so elementary. But with the right clues, I mean with the right software and tools, it could be made elementary.

Follow Radha on Twitter @rgowda

Read the IBM Data Management for Banking whitepaper for more information on how IBM can help banks gain a competitive edge!

Achieving High Availability with PureData System for Transactions


Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

A short time ago, I wrote about improving IT productivity with IBM PureData System for Transactions and I mentioned a couple of new white papers and solution briefs on that topic.  Today, I’d like to highlight another one of these new papers: Achieving high availability with PureData System for Transactions.

I’ve recently been meeting with a lot of different companies and organizations to talk about DB2 pureScale and PureData System for Transactions, and while there’s a lot of interest and discussion around performance and scalability, the primary reason that I’m usually there is to talk about high availability and how they can achieve higher levels than what they’re seeing today. One thing I’m finding is that there are a lot of different interpretations of what high availability means (and I’m not going to argue here over what the correct definition is). To some, it’s simply a matter of what happens when some sort of localized unplanned outage occurs, like a failure of their production server or a component of that server. How can downtime be minimized in that case? Others extend this discussion to include planned outages, such as maintenance operations or adding more capacity into the system. And others will include disaster recovery under the high availability umbrella as well (while many keep them as distinctly separate topics — but that’s just semantics). It’s not enough that they’re protected in the event of some sort of hardware component failure on their production system; what would happen if the entire data center were to experience an outage? Finally (and I don’t mean to imply that this is an exhaustive list — when it comes to keeping the business available and running, there may be other things that come into the equation as well), availability could also include a discussion of performance. There is typically an expectation of performance and response time associated with transactions, especially those that are being executed on behalf of customers, users, and business processes. If a customer clicks a button on a website and it doesn’t come back quickly, it may not be distinguishable from an outage, and the customer may leave that site, choosing to go to a competitor instead.

It should be pointed out that not every database requires the highest levels of availability. It might not be a big deal to an organization if a particular departmental database is offline for 20 minutes, or an hour, or even the entire day. But there are certainly some business-critical databases that are considered “tier 1” that do require the highest availability possible. Therefore, it is important to understand the availability requirements that your organization has. But I’m likely already preaching to the choir here, and you’re reading this because you do have a need and you understand the ramifications to your business if these needs aren’t met. With respect to the companies I’ve been meeting with, just hearing about what kinds of systems they depend on from both an internal and external perspective – and what it means to them if there’s an interruption in service – has been fascinating. Of course, I’m sympathetic to their plight, but as a consumer and a user I still have very high expectations around service. I get pretty mad when I can’t make an online trade, check the status of my travel reward accounts, or even order a pizza online; especially when I know what those companies could be doing to provide better availability to their users. :-)

Those things I mentioned above — high availability, disaster recovery, and performance (through autonomics) — are all discussed as part of the paper in the context of PureData System for Transactions. PureData System for Transactions is a reliable and resilient expert integrated system designed for high availability, high throughput online transaction processing (OLTP). It has built-in redundancies to continue operating in the event of a component failure, disaster recovery capabilities to handle complete system unavailability, and autonomic features to dynamically manage utilization and performance of the system. Redundancies include power, compute nodes, storage, and networking (including the switches and adapters). In the case of a component failure, a redundant component keeps the system available. And if there is some sort of data center outage (planned or unplanned), a standby system at another site can take over for the downed system. This can be accomplished via DB2’s HADR feature (remember that DB2 pureScale is the database environment within the system) or through replication technology such as Q Replication or Change Data Capture (CDC), part of IBM InfoSphere Data Replication (IIDR).
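
As a rough sketch of the HADR option – the database name, hosts, ports, and sync mode are hypothetical; the standby is first restored from a backup of the primary, and each side’s configuration mirrors the other:

    -- On each server, point the database at its partner
    UPDATE DB CFG FOR sales USING
       HADR_LOCAL_HOST  prod1.example.com
       HADR_LOCAL_SVC   50010
       HADR_REMOTE_HOST dr1.example.com
       HADR_REMOTE_SVC  50010
       HADR_SYNCMODE    NEARSYNC;

    START HADR ON DB sales AS STANDBY;   -- run on the standby first
    START HADR ON DB sales AS PRIMARY;   -- then on the primary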

Just a reminder that the IDUG North America 2014 conference will be taking place in Phoenix next month from May 12-16. Being in a city that just got snowed on this morning, I’m very much looking forward to some hot weather for a change. Various DB2, pureScale, and PureData topics are on the agenda. And since I’m not above giving myself a shameless plug, come by and see me at my session: A DB2 DBA’s Guide to pureScale (session G05). Click here for more details on the conference. Also, check out Melanie Stopfer’s article on IDUG.  Hope to see you there!

Fraud detection? Not so elementary, my dear.


Radha Gowda, Product Marketing Manager, DB2 and related offerings

Did you know that fraud and financial crime have been estimated at over $3.5 trillion annually1? That identity theft alone cost Americans over $24 billion – $10 billion more than all other property crimes2? And that 70% of all companies have experienced some type of fraud3?

While monetary loss due to fraud is significant, the loss of reputation and trust can be even more devastating. In fact, according to a 2011 study by Ponemon Institute, organizations lose an average of $332 million in brand value in the year following a data breach. Unfortunately, fraud continues to accelerate due to advances in technology, organizational silos, lower risks of getting caught, weak penalties, and economic conditions. In this era of big data, fraud detection needs to go beyond traditional data sources – not just transaction and application data, but also machine, social, and geospatial data – for greater correlation and actionable insights. The only way you can sift through vast amounts of structured and unstructured data and keep up with the evolving complexity of fraud is through smarter application of analytics to identify patterns, construct fraud models, and conduct real-time detection of fraudulent activity.

 IBM Watson Foundation portfolio for end-to-end big data and analytics needs


While IBM has an impressive array of offerings addressing all your big data and analytical needs, our focus here is on how DB2 solutions can help you develop and test fraud models, score customers for fraud risk, and conduct rapid, near-real-time analytics to detect potential fraud. You have the flexibility to choose the type of solution that best fits your needs – select software solutions to take advantage of your existing infrastructure, or choose expert-integrated, appliance-based solutions for a simplified experience and fast time to value.

Highly available and scalable operational systems for reliable transaction data

DB2 for Linux, UNIX and Windows software is optimized to deliver industry-leading performance across multiple workloads – transactional, analytic and operational analytic – while lowering administration, storage, development, and server costs. DB2 pureScale, with its cluster-based, shared-disk architecture, provides application-transparent scalability beyond 100 nodes, helps achieve failover between two nodes in seconds, and offers business continuity with built-in disaster recovery over distances of a thousand kilometers.

IBM PureData System for Transactions, powered by DB2, is an expert integrated system of server, storage, network, and tools selected and tuned specifically for the demands of high-availability, high-throughput transactional processing—so you do not have to research, purchase, install, configure and tune the different pieces to work together. With its pre-configured topology and database patterns, you can set up high availability cluster instances and database nodes to meet your specific needs and deploy the same day, rather than spending weeks or months. As your business grows, you can add new databases in minutes and manage the whole system using its intuitive system management console.

Analytics for fraud detection

DB2 Warehouse Analytics – DB2 advanced editions offer capabilities such as online analytical processing (OLAP), continuous data ingest, data mining, and text analytics that are well-suited for real-time enterprise analytics and can help you extract structured information out of previously untapped business text. The business value of these capabilities in enabling fraud detection is immense.

IBM PureData System for Operational Analytics, powered by DB2, helps you deliver near-real-time insights with continuous data ingest and immediate data analysis. It is reliable, scalable, and optimized to handle thousands of concurrent operational queries with outstanding performance. You can apply fraud models to identify suspicious transactions while they are in progress, not hours later. This can apply across any industry segment, including financial services, health care, insurance, retail, manufacturing, and government services. PureData System for Operational Analytics helps not just with real-time fraud detection, but also with cross-sell and up-sell offers – identifying customer preferences, anticipating customer behavior, and predicting the optimum offer or service in real time.

DB2 with BLU Acceleration, available in advanced DB2 editions, uses advanced in-memory columnar technologies to help you analyze data and generate new insights in seconds instead of days. It can provide performance improvements ranging from 10x to 25x and beyond for analytical queries with minimal tuning, with some queries achieving a 1,000-times improvement4. DB2 with BLU Acceleration is extremely simple to deploy and provides good out-of-the-box performance for analytic workloads. From a DBA’s perspective, you simply create table, load and go. There are no secondary objects, such as indexes or MQTs, that need to be created to improve query performance.
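
That “create table, load and go” claim translates into very little DDL. A minimal sketch with hypothetical names – DB2_WORKLOAD=ANALYTICS is set before the database is created, so column organization and memory settings default appropriately:

    -- Registry setting that makes ORGANIZE BY COLUMN the default and
    -- auto-configures memory and workload management for analytics
    db2set DB2_WORKLOAD=ANALYTICS

    -- Create, load, and query; no indexes or MQTs required
    CREATE TABLE mart.txn_history (
      card_id BIGINT,
      txn_ts  TIMESTAMP,
      amount  DECIMAL(12,2)
    ) ORGANIZE BY COLUMN;

    LOAD FROM txn_history.del OF DEL INSERT INTO mart.txn_history;

    SELECT card_id, SUM(amount) AS total_spend
    FROM   mart.txn_history
    GROUP BY card_id;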

DB2 with BLU Acceleration can handle terabytes of data, helping you conduct customer scoring across your entire customer data set and develop and test fraud models that explore a full range of variables based on all available data. Sometimes creating a fraud model may involve looking at 100s of terabytes of data, where IBM® PureData™ System for Analytics would fare better. Once a fraud model is created, you can use DB2 with BLU Acceleration to apply the model to every transaction that comes in, for speed-of-thought insight.

IBM Cognos® BI – DB2 advanced editions come with five user licenses for Cognos BI, enabling users to access and analyze the information they need to make the decisions that lead to better business outcomes. Cognos BI with Dynamic Cubes, an in-memory accelerator for dimensional analysis, enables high-speed interactive analysis and reporting over terabytes of data. DB2 with BLU Acceleration integrated with Cognos BI with Dynamic Cubes offers fast-on-fast performance for all your BI needs.

With the array of critical challenges facing financial institutions today, the smarter ones are those that successfully protect their core asset – data. IBM data management solutions help you integrate information and generate new insights to detect and mitigate fraud. We invite you to explore and experience DB2 and the rest of the Watson Foundation offerings made with IBM.

Stay tuned for the second part of this blog that will explore the product features in detail.

1 ACFE 2012 report to the nations
2 BJS 2013 report on identity theft
3 Kroll 2013/2014 global fraud report

4 Based on internal IBM tests of analytic workloads comparing queries accessing row-based tables on DB2 10.1 vs. columnar tables on DB2 10.5. Results not typical. Individual results will vary depending on individual workloads, configurations and conditions, including size and content of the table, and number of elements being queried from a given table.

Follow Radha on Twitter @rgowda

 

Improve IT Productivity with IBM PureData System for Transactions


Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM
I’m a command line kind of guy, always have been. When I’m loading a presentation or a spreadsheet on my laptop, I don’t open the application or the file explorer and work my way through it to find the file in question and double click the icon to open it. Instead, I open a command line window (one of the few icons on my desktop), navigate to the directory I know where the file is (or will do a command line file search to find it) and I’ll execute/open the file directly from there. When up in front of a crowd, I can see the occasional look of wonder at that, and while I’d like to think it’s them thinking “wow, he’s really going deep there… very impressive skills”, in reality it’s probably more like “what is this caveman thinking… doesn’t he know there are easier, more intuitive ways of accomplishing that?!?”

The same goes for managing and monitoring the systems I’ve been responsible for in the past. Where possible, I’ve used command line interfaces, I’ve written scripts, and I’ve visually pored through raw data to investigate problems. But inevitably I’d end up doing something wrong, like missing a step, doing something out of order, or missing some important output – leaving things not working or not performing as expected. Over the years, I’ve considered that part of the fun and challenge of the job. How do I fix this problem? But nowadays, I don’t find it so fun. In fact, I find it extremely frustrating. Things have gotten more complex and there are more demands on my time. I have much more important things to do than figure out why the latest piece of software isn’t interacting with the hardware or other software on my system in the way it is supposed to. When I try to do things on my own now, any problem is immediately met with an “argh!” followed by a google search, hoping to find others who are trying to do what I’m doing and have a solution for it.

When I look at enterprise-class systems today, there’s just no way that some of the old techniques of implementation, configuration, tuning, and maintenance are going to be effective. Systems are getting larger and more complex. Can anybody tell me that they enjoy installing fix packs from a command line or ensuring that all of the software levels are at exactly the right level before proceeding with an installation of some modern piece of software (or multiple pieces that all need to work together, which is fairly typical today)? Or feel extremely confident in getting it all right? And you’ve all heard about the demands placed on IT today by “Big Data”. Most DBAs, system administrators, and other IT staff are just struggling to keep the current systems functioning, not able to give much thought to implementing new projects to handle the onslaught of all this new information. The thought of bringing a new application and database up, especially one that requires high availability and/or scalability, is pretty daunting. As is the work to grow out such a system when more demands are placed on it.

It’s for these reasons and others that IBM introduced PureSystems. Specifically, I’d like to talk here about IBM PureData System for Transactions. It’s an Expert Integrated System that is designed to ensure that the database environment is highly available, scalable, and flexible to meet today’s and tomorrow’s online transaction processing demands. These systems are a complete package and they include the hardware, storage, networking, operating system, database management software, cluster management software, and the tools. It is all pre-integrated, pre-configured, and pre-tested. If you’ve ever tried to manually stand up a new system, including all of the networking stuff that goes into a clustered database environment, you’ll greatly appreciate the simplicity that this brings.

The system is also optimized for transaction processing workloads, having been built to capture and automate what experts do when deploying, managing, monitoring, and maintaining these types of systems. System administration and maintenance is all done through an integrated systems console, which simplifies a lot of the operational work that system administrators and database administrators need to do on a day-to-day basis. What? Didn’t I just say above that I don’t like GUIs? No, I didn’t quite say that. Yeah, I still like those opportunities for hands-on, low-level interactions with a system, but it’s hard not to appreciate something that is going to streamline everything I need to do to manage a system and at the same time keep my “argh” moments down to a minimum. The fact that I can deploy a DB2 pureScale cluster within the system in about an hour and deploy a database in minutes (which, by the way, also automatically sets it up for performance monitoring) with just a few clicks is enough to make me love my mouse.

IBM has recently released some white papers and solution briefs around this system and a couple of them talk to these same points that I mentioned above. To see how the system can improve your productivity and efficiency, allowing your organization to focus on the more important matters at hand, I suggest you give them a read:

Improve IT productivity with IBM PureData System for Transactions solution brief
Four strategies to improve IT staff productivity white paper

The four strategies described in these papers, which speak to the capabilities of PureData System for Transactions, are:

  • Simplify and accelerate deployment of high availability clusters and databases
  • Streamline systems management
  • Reduce maintenance time and risk
  • Scale capacity without incurring downtime

I suspect that I won’t be changing my command line and management/maintenance habits on my laptop and PCs any time soon, but when it comes to this system, I’m very happy to come out of my cave.

It’s the Most Wonderful Time of the Year


Melanie Stopfer, Consulting Learning Specialist, Information Management, IBM

It’s getting close to one of my favorite weeks of the year – the IDUG North America Technical Conference. Break out the eggnog and mistletoe. I feel like a child before Christmas, counting the days until I can open presents. This year there’s an even bigger present waiting, because my 83-year-old mother and younger sister live in the Phoenix area. The IDUG DB2 2014 Technical Conference will be held at the Sheraton Downtown Phoenix Hotel and Convention Center, May 12-16.

IDUG is the foremost independent, user-driven community that provides a direct channel to thousands of professional DB2 users across the globe.  At the conference you gain access to influential decision makers.  You can also raise your profile by gaining a deeper understanding of marketplace needs and the competitive environment.  There are so many opportunities to engage with current and potential customers looking to learn about DB2 and related technologies to improve business performance.

Use the following tidbits to provide justification for your attendance. Planning your week can be a little time-consuming – and for the new attendee, a little overwhelming – so use the following information to set yourself up for a very successful week.

Monday, May 12 – IDUG One Day Seminars
Take a deep dive into specific aspects of DB2 with some of the most renowned speakers in our industry. You will not only have the opportunity to listen to these speakers but also to ask them specific questions. There is an additional registration fee for these seminars of $425 for paid full IDUG Conference delegates ($475 for unpaid), which includes lunch and a hard copy of the materials. Select from the following session topics:

  • Advanced SQL Coding and Performance (Dan Luksetich)
  • DB2 10 for z/OS System Admin Crammer class for Certification Exam 617 (Judy Nall)
  • DB2 10.1 for Linux, UNIX, and Windows Database Administration Certification Exam (Exam 611) Preparation (Roger Sanders)
  • DB2 11 for z/OS Database Administration Certification Crammer Course (Susan Lawson)
  • DB2 LUW Problem Determination and Troubleshooting Workshop (Pavel Sustr and Samir Kapoor)
  • DB2 V10+ for LUW Top Gun Performance Workshop (Martin Hubel and Scott Hayes)
  • The BEST of the BEST of the THINGS I WISH I’D TOLD YOU 8 YEARS AGO (Bonnie Baker)

Tuesday, May 13 – Keynote Speaker
Donald Feinberg, vice president and distinguished analyst in the Gartner Intelligence Information Management group, is responsible for Gartner’s research on database management systems, data warehousing infrastructure and Big Data. I can’t wait to hear his insights.

Tuesday, May 13 through Friday, May 16 – Attend your selected technical sessions
Build your personalized conference agenda using IDUG’s My Conference feature. Customize your week at IDUG with technical sessions, technical networking sessions and social networking opportunities. The My Conference feature also allows you to evaluate sessions and the overall conference. You can download your custom agenda or sync it with your calendar (Google Calendar, Outlook 2010, etc.) and have the information at your fingertips. Check out the agenda by technical track, time, or title. Just click on a title to get the speaker bio, session abstract, and session objectives. Don’t forget about the new Big Data and Analytics track. You can choose technical sessions from the following tracks:

  • A La Carte
  • Big Data and Analytics
  • DB2 for Developers
  • DB2 for LUW – I
  • DB2 for LUW – II
  • DB2 for z/OS – I
  • DB2 for z/OS – II

Consider selecting my two presentations:

  • Wed May 14, 8:00 AM, Paradise Valley – C05 Best Practices Upgrade to DB2 10.5 with BLU Acceleration
  • Thu May 15, 1:00 PM, Camelback – D11 Using Row and Column Access Controls (RCAC) with DB2 LUW 

Wednesday, May 14, 3:30-4:45 – Special Interest Groups (SIGs)
Get face-to-face with developers and technical experts and discuss what’s hot.  Where does your special interest lie?  Wish I could clone myself and attend more than one SIG.  Walid Rjaibi, IBM’s Chief Security Architect for Information Management, Rebecca Bond, R Bond Consulting, Inc., and I will co-chair SIG1F – Security and Data Masking.

  • SIG1A – DB2 Memory Usage, Futures and Features
  • SIG1B – Fun with SQL
  • SIG1C – Maintaining Big Data
  • SIG1D – DB2 BLU Acceleration
  • SIG1E – DB2 Compression – What are the options? What are the license implications?
  • SIG1F – Security and Data Masking
  • SIG1G – DB2 in the Cloud

Thursday, May 15, 7:30-8:30 AM – Call for Volunteers Breakfast

IDUG relies on its members to leverage the technical and business acumen of DB2 professionals. It is because of dedicated members that IDUG exists.  To continue to provide the best networking and education to IDUG members worldwide, IDUG is always looking for active volunteers. Take part!  Get involved!  Help build IDUG’s future.

Thursday, May 15, 12:00-1:00 PM – Speaker Feedback Lunch

A great opportunity for speakers to discuss their experiences with the IDUG NA conference team. Meet for lunch in Encanto A and provide valuable feedback to the planning committee to improve the process for next year.

Thursday, May 15, 4:45-6:00 PM – Mini-Panels

You can submit your questions early at the registration help desk.  Always a lively Q&A with lots of technical answers and often hints of what is coming in the future.

  • SIG2A – z/OS Mini Panel
  • SIG2C – LUW Mini Panel

Thursday, May 15, 6:30 PM – DINE AROUND EVENT

FILL IN THE BLANK: “10 DBAs walk into a restaurant and ____________.”
Seriously, the night is your oyster.  What a great opportunity to make new friends and renew previous relationships.  Every year, I have such a great time meeting new people and learning about them.  Don’t miss this event.  Sign up early with your favorite speaker’s restaurant since the limited number of spots go quickly.  Each diner receives a separate check for their charges. Ember Crooks, IBM Champion and db2commerce.com blogger, and I are the speakers assigned to dine at Pizzeria Bianco.  Who wants to join us for Italian?

Friday, May 16, 8:00 AM to 2:00 PM – Complimentary Technical Workshops
Wowza – who doesn’t love a free workshop?  What a great opportunity to build your technical skills!  You register for the workshops when completing your conference registration.

  • DB2 BLU Acceleration for the Cloud Workshop – Learn about the exciting new enhancements in DB2 BLU, how they relate to cloud implementations, and how they can provide data insights to your organization quickly and easily.
  • DB2 11 Migration Planning Workshop – DB2 11 gives back with more out-of-the-box performance improvements, greater migration flexibility, and many optimization, administration, and application features. Leave the session with materials that you can use to start your installation/migration immediately, or in the future.

FREE IBM CERTIFICATION EXAMS

All DB2 Tech Conference attendees can take a FREE IBM CERTIFICATION EXAM! Each attendee may take one exam for free, and those who pass are eligible for a second free exam. Additional exams will be offered at the low discounted price of $25. Prepare by attending one of the pre-certification education seminars on Monday listed above. The closing time listed is the last seating for the day. Go early in the week, because time slots fill up towards the end of the week. Exams will be administered:

  • Tuesday, May 13: 10:00 – 17:00
  • Wednesday, May 14: 9:00 – 16:00
  • Thursday, May 15: 8:00 – 17:00
  • Friday, May 16: 8:00 – 12:00

Products and Services Expo
Walk the exhibit hall and take advantage of the excellent opportunity to learn more about the latest DB2 technologies.  While you pick up some great souvenirs, absorb the technical content you need to be successful.  Several vendors will be handing out invitations to parties where you can network the nights away.  The expo is open:

  • Tuesday, May 13, 5:30 pm – 7:30 pm
  • Wednesday, May 14, 11:30 am – 1:00 pm, 4:30 pm – 6:00 pm
  • Thursday, May 15, 11:30 am – 1:00 pm

First Time Attendees and IDUG Mentors Discount
Discount! Did someone mention $$$SAVINGS$$$? The IDUG Mentor program recognizes and helps loyal IDUG attendees, IBM Champions and RUG leaders share their DB2 knowledge with first-time attendees. If you fall into one of these categories, you are eligible to apply for a coupon worth 80% off the registration rate for a first-time attendee. First-time attendees can take advantage of this offer by reaching out to a co-worker who falls into one of these categories. To apply, or for more information about the IDUG Mentor Program, visit the IDUG Mentor webpage.

Justification for Attending IDUG Conference

Go to the IDUG website and download the Justification Letter and How to Justify list.

Tips

  • Remember to register for a complimentary technical workshop when completing your conference registration.
  • Build your agenda before arriving so that you don’t overlook your favorite speaker or topic.  There is so much happening during the week that you will want to take advantage of every opportunity.
  • IDUG NA Conference home website
  • Build time in your agenda to take at least one complimentary certification exam.  All results are private unless you decide to share the news.
  • Make “lunch dates” with fellow attendees or speakers.  No one likes to eat alone.  Your invitation will make someone’s day. 
  • Bring a light jacket, sweater or wrap.  Convention Centers are really meat lockers in disguise.
  • Sign up to moderate at least one session. Besides the track chair becoming your new friend, you get to meet the speaker and become involved with a great organization. What a great and easy way to get involved!
  • Pack extra business cards.
  • Use the #idug hashtag in your Tweets.
  • On Friday technical sessions end at 11:45 AM and complimentary technical workshops end at 2:00 PM.

If you are interested in networking opportunities with fellow users, product developers and solution providers, user-driven product training and quality continuing education, IDUG DB2 Technical Conference is for you!

Watch out, Santa, because I’m opening my presents early – IDUG is coming to Phoenix. See you there.

——–

Melanie Stopfer is a Consulting Learning Specialist and Developer for IBM Software Group. She is recognized worldwide as an advanced database specialist. As a Certified DB2 LUW 10.5 Technical Expert and Learning Facilitation Specialist, she has provided in-depth technical support to IM customers, specializing in recovery, performance, and database upgrade and migration best practices, since 1988. In 2009, Melanie was the first DB2 LUW speaker to be inducted into the IDUG Speaker Hall of Fame, and she was selected Best Overall Speaker at IDUG EMEA 2011 and 2012 and at IDUG NA 2012. In 2013, IOD rated her presentations among the top two for customer satisfaction.
