August Tweetchat – How Important Are Transactions In a Big Data World?

By Susan Visser

Join us for the #Transactions Twitterchat on “How important are transactions in a big data world?” Aug 20, from 11:00 am to 12:00 pm ET.

Our special guests for this Twitterchat are:

  • Scott Hayes – @srhayes
  • Craig Mullins – @craigmullins
  • Kelly Schlamb – @KSchlamb

You can follow along and join the discussion using the hashtag #Transactions. We look forward to your thoughts and comments on the #Transactions twitterchat hosted by @IBM_DB2 (Susan Visser).

How do you join in?

If you use a Twitter client like Tweetdeck or HootSuite, create a search column for the term #Transactions. Then as participants tweet with the #Transactions hashtag, those tweets will appear in your column.

How do you participate?

Review the discussion questions so you can prepare your thoughts and answers. When the question is posed, begin your response with A1: for question 1 and A2: for question 2, and so on. This makes it easier to follow the conversation throughout the chat. No answer is wrong!

Discussion Questions

  • Q1: How do you define “transaction”? Online purchase, sensor reading, CRM/ERP update, log entry, web click, social post?
  • Q2: Who is impacted, and how, when transaction systems go down or perform poorly?
  • Q3: How should Service Level Agreements address transaction system availability and performance?
  • Q4: What are the most important database capabilities for high availability and consistent performance?
  • Q5: When you need to make changes for performance or growth, how does the database scale? Do applications need to change?
  • Q6: How many DBAs does it take to …keep a database up and running smoothly? …respond to problems?
  • Q7: How can you minimize planned downtime for backups, maintenance, upgrades?
  • Q8: If an unplanned outage or failure happens, what is an acceptable recovery time?

Reading List

More about our panelists

Craig Mullins

Craig is president & principal consultant of Mullins Consulting, Inc., a principal with SoftwareOnZ, and the publisher/editor of The Database Site. His working experience spans multiple industries. He’s written multiple books and blogs at http://datatechnologytoday.wordpress.com/

Scott Hayes

Scott Hayes is President & CEO of Database-Brothers Inc. (DBI), a company intensely focused on IBM DB2 UDB LUW and Oracle database solutions. Scott is an IBM DB2 GOLD Consultant and has received a number of DB2 certifications. Scott hosts The DB2Night Show series of webinars; check the website http://www.dbisoftware.com/db2nightshow/ where you can find hundreds of hours of free DB2 education.

Kelly Schlamb

Kelly works in the IBM Software Group, Information Management as an Executive IT Specialist in WW IM Technical Sales.  His specialties are DB2 pureScale & PureData System for Transactions.  Kelly has written several blog entries on the IBM DB2 Blog site.

Tweetchat on Fraud Prevention in Banking

By Radha Gowda
Product Marketing Manager, DB2 and related offerings

On August 7, 2014, at 11 AM EDT, the IBM Data Management team is privileged to have Robert L. Palmer, James Kobielus, and Wilson Davis join us for a tweetchat to share their expertise on #FraudPrevention in banking. Some of the topics we will be soliciting your opinions on are:

  • Q1: Are fraudulent activities in banking increasing or decreasing? Why?
  • Q2: What are some key business impacts of fraud?
  • Q3: What measures can be taken to identify potential fraudulent transactions?
  • Q4: What analytics do you need to detect fraud?
  • Q5: What data sources can contribute to the analytics?
  • Q6: How can your systems analyze transactions as they occur?
  • Q7: How can new technologies such as in-memory analytics help in fraud detection?
  • Q8: Where can I learn more?

Here’s what you need to do to join our conversation, whether to contribute or just listen:

  • Go to twubs.com or tweetdeck.com
  • Sign in with your twitter handle
  • Search on #FraudPrevention
  • A new window will open that makes it easy for you to follow and contribute.

If you plan to contribute to our tweetchat, please review the tips at slideshare since the chat can be very fast paced. Suggested resources relevant to the topic include:

  1. How to Mitigate Fraud and Cyber Threats with Big Data and Analytics
  2. IBM data management for banking
  3. Best practices to deploy IBM Banking Data Warehouse model to #IBMBLU for production
  4. Attract and retain customers with always-on digital mobile banking services
  5. Fight against fraud in real-time and save on operating expenses
  6. Customize offers to your clients with the data already at your fingertips
  7. World’s top 5 most secure bank is becoming more strategic and more profitable
  8. Regulatory reporting headaches? See how @Handelsbanken solved their reporting challenges

More about our panelists:

Robert L. Palmer (@bigdatabusiness), Global Banking Industry Marketing, Big Data, IBM

Bob’s expertise is applying B2B software to optimize key business processes.  He is a subject matter expert in financial services, and writes about business challenges, Big Data, analytics, CRM, cognitive computing, and information management.

James Kobielus (@jameskobielus), Senior Program Director, Big Data Analytics, IBM

James is a popular speaker and thought leader in big data, Hadoop, enterprise data warehousing, advanced analytics, business intelligence, data management and next best action technologies.

Wilson Davis (@wilsondavisibm) Executive Technical Consultant – Counter Fraud iCoC, IBM

Wilson’s specialties include financial and operational data analytics, counter-fraud and anti-money laundering, straight-through-processing, and game changing improvements in business processes and application systems for the financial services industry.

The data advantage: Creating value in today’s digital world

The IBM Institute for Business Value is looking to understand how organizations around the globe are creating business value from analytics. If you can spare a few minutes to participate in the survey, you’d be the first to receive a copy of the study when it is released in October 2014: 2014 Analytics Survey.

Follow Radha on Twitter @rgowda

When Your Database Can’t Cut It, Your Business Suffers

By Larry Heathcote
Program Director, IBM Data Management

 

Your database is critical to your business. Applications depend on it. Business users depend on it. And when your database is not working well, your business suffers.

IBM DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

We’ve all heard the phrase “garbage in, garbage out,” and this is so true in today’s big data world. But it’s not just about good data; it’s also about the infrastructure that captures and delivers data to business applications and provides timely and actionable insights to those who need to understand, to make decisions, to act, to move the business forward.

 

It’s one thing to pull together a sandbox to examine new sources of data and write sophisticated algorithms that draw out useful insights. But it’s another matter to roll this out into production, where Line of Business users depend on good data, reliable applications and insightful analytics. This is truly where the rubber meets the road – the production environment – and your database had better be up to it.

Lenny Liebmann, InformationWeek Contributing Editor, and I recently recorded a webinar titled “Is Your Database Really Ready for Big Data.” Lenny also posted a blog about the role of DataOps in the modern data infrastructure. I’d like to extend this one more step and talk about the importance of your database in production. The best way I can do that is through some examples.

 

1: Speed of Deployment

ERP systems are vital to many companies for effective inventory management and efficient operations. It is important to make sure that these systems are well tuned, efficient and highly available, and that when a change is needed it can be made quickly. Friedrich ran the SAP environment for a manufacturing company, and he was asked to improve the performance of applications used for inventory management and supply chain operations. More specifically, he needed to replace the production database with one that improved application performance while keeping storage growth to a minimum. Knowing that time is money, his mission was to deploy the solution quickly, which he did: up and running in a production environment in 3 hours, with more than 80 percent data compression and a 50x performance improvement. The business impact: inventory levels were optimized, operating costs were reduced and the supply chain became far more efficient.

 

2: Performance

Rajesh’s team needed to improve the performance of an online sales portal that gave his company’s reps the ability to run sales and ERP reports from their tablets and mobile phones out in the field. Queries were taking 4-5 minutes to execute, and this simply was not acceptable – by the way, impatience is a virtue for a sales rep. Rajesh found that the existing database was the bottleneck, so he replaced it. With less than 20 hours of work, it was up and running in production with a 96.5 percent reduction in query times. Can you guess the impact this had? Yep, sales volumes increased significantly, Rajesh’s team became heroes and the execs were happy. And since reps were more productive, they were also more satisfied and rep turnover was reduced.

 

3: Reliability, Availability and Scalability

In today’s 24x7x365 world, transaction system downtime is just not an option. An insurance company was having issues with the performance, availability, reliability and scalability needed to support its rapidly growing volume of insurance applications. Replacing the database not only increased application availability from 80 to 95 percent, but also delivered a dramatic improvement in data processing times even after a 4x growth in the number of concurrent jobs, and decreased total cost of ownership by 50 percent. The company also saw customer satisfaction and stickiness improve.

These significant results happened because these clients upgraded their core database to IBM DB2. DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

To learn more, watch our webinar.

Follow Larry on Twitter at @larryheathcote

 

Join Larry and Lenny for a Tweet Chat on June 26 at 11 am ET. Join the conversation using #bigdatamgmt. For the questions and more details see: http://bit.ly/Jun26TweetChat

A Dollar Saved Is Two Dollars Earned. Over A Million Dollars Saved Is?


Radha Gowda, Product Marketing Manager, DB2 and related offerings

A refreshing new feeling, because DB2 can offer your business a 57% improvement in compression, a 60% improvement in processing times, and a 30-60% reduction in transaction completion time.

Coca-Cola Bottling Co. Consolidated (CCBCC) faced severe business challenges: the rising cost of commodities and sharply higher fuel prices could not be allowed to impact consumers of its world-famous sodas. At the time of an SAP software refresh, the CCBCC IT team reviewed the company’s database strategy and discovered that migrating to IBM DB2 offered significant cost savings. DB2 has delivered total operating cost reductions of more than $1 million over four years. And DB2 10 has continued to be a compression workhorse, delivering another 20% improvement in compression rate.

Staying competitive in a tough market

Andrew Juarez, Lead SAP Basis and DBA at CCBCC, notes: “We happen to be in a market where we are considered an expendable item. In other words, it is not something that is mandatory. So we cannot push the price off to our customers to offset any losses that we may have, which means that we need to be very competitive on how we price our product.”

Making the move to IBM DB2

Tom DeJuneas, IT Manager at CCBCC, states:   “We did a cost projection, looking at the cost of Oracle licenses and maintenance fees, and calculated that we could produce around $750,000 worth of savings over five years by switching to IBM DB2. We also undertook a proof-of-concept phase, which showed that IBM DB2 was able to offer the same, and potentially more, functionality as an Oracle system.”

Moving from Oracle has brought about a significant change in the IT organization’s strategy, as Andrew Juarez explains: “When we were on Oracle, our philosophy was that we did not upgrade unless we were doing a major SAP upgrade. If the version was stable, then we stayed on it. Now, with IBM DB2, our strategy has completely changed, because with every new release our performance keeps getting better and better, and the value of the solution continues to grow.”

Fast, accurate data

IBM DB2 manages key data from SAP® ERP modules such as financials, warehouse management, materials management and customer data. Tom DeJuneas states, “Many of our background jobs and online dialog response times have improved considerably. For example, on the first night after we performed the switchover, one of our plant managers reported that jobs that normally took 90 minutes to run were running in just 30 minutes. This was simply by changing the database. So we had a massive performance increase in supply chain batch runs right from the get-go.”

Impressive cost savings

IBM DB2 has helped CCBCC to make better use of its existing resources, delaying costly investment in new hardware and freeing up more money for investment in other projects.

“Originally, when we did our business case for moving to IBM DB2, it was built around the savings on our Oracle licenses and maintenance, and that was it,” notes Andrew Juarez. “We did not factor in disk savings, so the fact that we are seeing additional savings around storage is icing on the cake. We had originally projected about $750,000 in savings over five years; to date, at four years, we have seen just over a million dollars in savings after migrating to IBM DB2. So we have bettered our original estimate by more than 25 percent.”

Tom DeJuneas concludes, “At CCBCC it is very important for us to stay on the frontline of innovation, and technology like IBM DB2 helps us to do that. Based on our experience, I do not see why anyone running SAP would use anything other than IBM DB2 as its database engine.”

Download CCBCC migrates to IBM DB2, saves more than $1 million for complete details.

For new insights to take your business to the next level and of course, cost savings, we invite you to try the DB2 with BLU Acceleration side of life.

Cost/Benefit Comparison of DB2 10.5 and Oracle Offerings


Danny Arnold, Worldwide Competitive Enablement Team

As part of the IBM Information Management team, I’m often asked to describe the advantages of DB2 10.5 over the Oracle offerings (Oracle Database 12c, Oracle Exadata, and other products). There are many reasons to choose DB2 10.5 over Oracle Database from a business value standpoint, including licensing costs; technology advantages in the areas of compression and continuous availability with DB2 pureScale; and, of course, the latest innovation, BLU Acceleration. BLU Acceleration combines columnar processing, memory optimization, and other technologies to deliver fast analytic query results and excellent compression along with greatly reduced administration. However, from a client’s perspective, all of this is someone from IBM stating that DB2 10.5 delivers these things (and we are probably a little biased).

Therefore, it was nice to read the recently published report from International Technology Group (ITG), Cost/Benefit Case for IBM DB2 10.5 for High Performance Analytics and Transaction Processing Compared to Oracle Platforms.

This report describes both the high performance analytics and transactional processing application areas and highlights the advantages of DB2 10.5 over the Oracle offerings. There are a number of key pieces of information that this report brings to light including:

  • DB2 10.5 provides a lower total cost of ownership than Oracle
    • 28% to 34% lower 3-year TCO for transactional processing
    • 54% to 63% lower 3-year TCO for high performance analytics
  • Faster deployment, with an average deployment time 57 days less than an Oracle-based solution
  • Better compression rates for DB2 than Oracle (an average of 12.6X for DB2 versus 7.3X for Oracle)

The ITG report provides details in the areas of technology differentiators for high performance analytics, where Oracle Database does not use Massively Parallel Processing (MPP) or an in-memory, RAM-based approach like DB2 with BLU Acceleration or SAP HANA, but instead provides another level of processing within the Exadata Storage Servers. So there is no built-in capability within the Oracle Database itself to process analytics efficiently; the client must purchase an Oracle Exadata engineered system to gain any performance advantages for an analytics workload. ITG continues its analysis of the Oracle Exadata system for analytics by stating:

The hybrid design has two important implications:
  1. The overall environment is complex – administrators, for example, must deal with partitioned Oracle databases, RAC, and Exadata-specific hardware and software features.
  2. Use of system resources is inefficient. High levels of system overhead are generated.
Exadata may be characterized as a “brute force” design. Because systems must compensate for overhead, the considerable processing power offered by this platform does not translate directly into application level performance.

The ITG report wraps up its discussion of technology differentiators by describing how DB2 with BLU Acceleration takes advantage of the multiple processor cores available in today’s Intel and Power environments, along with the other BLU technology advantages, to deliver an average of 31.6X better performance, versus the smaller average performance gain of 5.5X experienced by Oracle Exadata clients.

During the ITG discussion of complexity between DB2 and Oracle within the report, the findings were even more favorable to DB2. Oracle Exadata administrators have to develop system and storage skills to augment their Oracle Database DBA skills; the average Oracle Exadata administrative task breakdown is 60% database administration and 40% system and storage administration. In ITG discussions with clients, Oracle Exadata systems required an average of 0.8 FTEs (full-time equivalents) per Exadata system for administration, versus an average of 0.25 FTEs for DB2 with BLU Acceleration. This complexity difference between the two environments was highlighted in the deployment time comparison, with an average deployment time of 38 days for DB2 with BLU Acceleration versus 95 days for Oracle Exadata. An interesting side note to this deployment time difference is that most Oracle Exadata deployments (over 60%) were performed by clients that were already Oracle Database owners and experienced with the Oracle Database.

The ITG report covers packaging and pricing and provides many tables and graphs highlighting the differences between DB2 10.5 and Oracle. If you are interested in learning more details about DB2 10.5 and its cost benefits over Oracle, I urge you to download and read the ITG report.


Whatever you do, stay uncompromised!


Radha Gowda, Product Marketing Manager, DB2 and related offerings

That is the new marketing campaign for the all-new Audi A3 – luxury without compromise at an affordable price. How does Audi AG accomplish that? By creating super-efficient business processes, for a start.

Audi AG, facing growing competition from global auto companies, teamed up with IBM to implement server virtualization using IBM PowerVM® and migrate over 100 SAP systems from HP-UX with Oracle Database 10g to IBM AIX 6.1 with IBM DB2 9.7. With the IBM private-cloud-ready infrastructure underpinning its SAP systems, Audi now has a robust, flexible, and high-performance platform for managing its business operations. As the company tackles increasing competition, the ability to expand and contract its SAP solutions in line with changing demands will help Audi ensure that it has the right IT resources in place, at the right cost of ownership.

Need for a more sustainable infrastructure

With over eight production plants across the world and increasing competition, the IT Services Department at Audi needed to help the company address multiple business challenges: increasing demands from employees, customers and suppliers, variable sales volumes, rising cost pressures, and the need for new technologies. They envisioned a virtualized infrastructure that is more flexible and sustainable; equally important was the availability and performance of over 100 business-critical SAP systems.

“IBM created an environment that enabled us to compare DB2 9.7 with a reorganized Oracle 10g database, both running on IBM Power Systems servers. The results were clear: storage savings in the range of 50 to 70 percent, and much higher performance when using DB2.”
– Markus Wierl, Service Owner of SAP Infrastructure, AUDI AG

Cloud for flexibility and speed

Teaming with IBM, Audi implemented server virtualization using IBM PowerVM® and migrated over 100 SAP systems, including more than 30 SAP landscapes and 26 high-availability clusters, from HP-UX with Oracle 10g database to IBM AIX 6.1 with IBM DB2 9.7. The new SAP landscape runs on dual-data-center, symmetrically implemented ‘private cloud ready’ infrastructure, with the infrastructure hosted by Audi and managed by IBM. Virtualization enabled Audi to pack a large number of separate business systems onto a small number of physical servers, pushing up utilization and eliminating costly unused capacity. Rather than having each logical system tied to a particular physical server, and only able to expand through the physical addition of new hardware, Audi can now reallocate resources on the fly from one system to another as required, and respond faster and more cost-effectively to new business requirements.

“This really was an impressive accomplishment by the IBM team: migrating more than 100 SAP systems to a completely new operating system and database in six months and with no disruption.” — Markus Wierl

Supporting business excellence

“We trust the IBM infrastructure to run our production systems, which are absolutely business-critical,” says Markus Wierl. “Any significant unplanned downtime could lead to a stoppage on our production lines. Modern automotive manufacturing is based on just-in-time concepts, and involves a large and complex partner ecosystem. So any minor disruption to production can rapidly turn into a major problem for multiple parties. For this reason, we highly value the robustness and availability of the IBM Power Systems and BladeCenter technology for our SAP solutions.”

Download how Audi gears up for continued success with IBM private cloud for complete details.

Migration is a factor in natural selection. Stay uncompromised as you evolve your data management systems.

Follow Radha on Twitter @rgowda

Simplifying Oracle Database Migrations


Danny Arnold, Worldwide Competitive Enablement Team

As part of my role in IBM Information Management as a technical advocate for our DB2 for LUW (Linux, UNIX, Windows) product set, I often enter into discussions with clients that are currently using Oracle Database.

With the unique technologies delivered in the DB2 10 releases (10.1 and 10.5), such as

  • temporal tables to allow queries against data at a specific point in time,
  • row and column access control (RCAC) to provide granular row- and column-level security that extends the traditional RDBMS table privileges for additional data security,
  • pureScale for near-continuous-availability database clusters,
  • database partitioning feature (DPF) for parallel query processing against large data sets (100s of TBs), and
  • the revolutionary new BLU Acceleration technology to allow analytic workloads to use column-organized tables that deliver performance orders of magnitude faster than conventional row-organized tables,

many clients like the capabilities and technology that DB2 for LUW provides.

However, a key concern is the level of effort to migrate an existing Oracle Database environment to DB2. Although DB2 provides Oracle compatibility and has had this capability built into the database engine since the DB2 9.7 release, there is still confusion on the part of clients as to what this Oracle compatibility means in terms of a migration effort. Today, DB2 provides a native Oracle PL/SQL procedural language compiler, support for Oracle-specific SQL language extensions, Oracle SQL functions, and Oracle-specific data types (such as NUMBER and VARCHAR2). This compatibility layer within DB2 allows many Oracle Database environments to be migrated to DB2 with minimal effort. Many stored procedures and much of the application SQL used against Oracle Database can run unchanged against DB2, reducing both migration effort and migration risk because the application does not have to be modified; the testing phase therefore takes much less effort than it would for changed or heavily modified application SQL and stored procedures. Although the migration effort is relatively straightforward, questions still come up with clients, and there is a need for a clear explanation of the Oracle Database to DB2 migration process.
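To make the compatibility layer more concrete, here is a minimal sketch using the ibm_db Python driver, assuming a DB2 database created with Oracle compatibility mode enabled (DB2_COMPATIBILITY_VECTOR=ORA). The connection string and the ORDERS table are hypothetical; the point is that Oracle-style DDL and SQL of this shape can be submitted to DB2 without rewriting.

```python
# Minimal sketch: running Oracle-flavored DDL/SQL against DB2 via the ibm_db
# Python driver. Assumes a DB2 instance where Oracle compatibility was enabled
# before the database was created (e.g. db2set DB2_COMPATIBILITY_VECTOR=ORA).
# Connection details and the ORDERS table are hypothetical.
import ibm_db

conn = ibm_db.connect("DATABASE=testdb;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

# Oracle-style data types (NUMBER, VARCHAR2) are accepted by DB2's
# compatibility layer, so DDL carried over from Oracle needs no rewrite.
ibm_db.exec_immediate(conn, """
    CREATE TABLE orders (
        order_id   NUMBER(10)    NOT NULL PRIMARY KEY,
        customer   VARCHAR2(100),
        amount     NUMBER(12,2)
    )
""")

# Application SQL written for Oracle can likewise run unchanged.
stmt = ibm_db.exec_immediate(
    conn, "SELECT customer, SUM(amount) AS total FROM orders GROUP BY customer")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["CUSTOMER"], row["TOTAL"])
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```

The same idea extends to PL/SQL stored procedures, which DB2's native PL/SQL compiler accepts directly.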

A new solution brief entitled “Simplify your Oracle database migrations,” recently published by IBM Data Management, explains how DB2 and the PureData System for Transactions appliance, built upon DB2 pureScale, can deliver a clustered database environment for migrating an Oracle database to DB2. The brief provides a concise overview of what an Oracle to DB2 migration requires and the assistance and tooling available from IBM to make a migration straightforward for a client’s environment, including a description of the IBM Database Conversion Workbench, which is available to help a client move their tables, stored procedures, and data from Oracle to DB2.

The fact that DB2 for LUW makes migrating from Oracle a task that takes minimal effort, thanks to the Oracle compatibility built into DB2, is complemented by the PureData System for Transactions. PureData System for Transactions provides an integrated, pre-built DB2 pureScale environment that allows a pureScale instance and a DB2 clustered database to be ready for use in a matter of hours, simplifying the implementation and configuration experience for the client. Combining the ease of Oracle migration to DB2 with the rapid implementation and configuration possible with PureData System for Transactions is a winning combination for a client looking for a more cost-effective and available alternative to Oracle Database.

Fraud detection? Not so elementary, my dear! (Part 2)


Radha Gowda, Product Marketing Manager, DB2 and related offerings

The first part of this blog gave an overview of the IBM Watson Foundation portfolio and DB2 solutions for financial fraud detection. In this part, we’ll go over the DB2 Warehouse features that help detect fraud in near real time.

Figure 1: DB2 warehouse for operational analytics


Data warehouses integrate data from one or more disparate sources to provide a single view of the business and have that single repository available to all levels of the business for analysis. To support today’s workloads, the data warehouse architecture must optimize both traditional deep analytic queries and shorter transactional type queries. It must be able to scale out under data explosion without compromising on either performance or storage. And, it must have the capacity to load and update data in real-time.  DB2 for Linux, UNIX and Windows offers you all these capabilities and more to help you build a scalable and high performing warehouse for near real-time fraud detection.

DB2 warehouse components are organized into six major categories as shown in Figure 2.  We shall discuss only the highlighted ones that help make near-real-time fraud detection a reality.

Figure 2: Warehouse components available in DB2 advanced editions


As we discussed before, fraud detection is knowledge intensive. It involves sifting through vast amounts of data to identify and verify patterns, and constructing fraud models to help with real-time detection of fraudulent activity.

Embedded Analytics
DB2 offers embedded analytics, in the form of OLAP and data mining.

Data Mining enables you to analyze patterns and make predictions. Unlike solutions that require end users to extract data from the warehouse, independently analyze it and then send the results back to the warehouse, DB2 provides embedded data mining, modeling, and scoring capabilities.

Modeling – the process starts with historical data being gathered and put through a series of mathematical functions to classify, cluster and segment the data. It automatically finds associations and business rules in the data that may signify interesting patterns (imagine customers’ credit card purchasing patterns). The business rules are then collected into a model, which can contain anywhere from a few rules to tens of thousands of them.

Visualization helps analysts evaluate the business rules to make sure that they are accurate.

Scoring involves applying the verified business rules to current data to help predict transactions that are likely to be fraudulent in real time.

For example, consider credit card spending patterns outside the norm. While outlier rules (which detect deviations in large data sets) can be applied to a banking transaction when it enters the system to help predict whether it is fraudulent, outlier handling is not usually automatic. An expert needs to take a closer look to decide whether or not to take action. This is where Cognos helps: it generates reports that visualize the outliers so a human expert can understand the nature of an outlier.
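To illustrate the kind of outlier rule described above, here is a toy Python sketch, not DB2's embedded mining engine: it flags a card transaction whose amount deviates strongly from that card's historical spending. The data, threshold, and helper function are hypothetical; in practice the model is built and scored inside the warehouse.

```python
# Toy illustration (not DB2's embedded mining) of an outlier rule: flag a
# card transaction whose amount deviates strongly from that card's
# historical spending pattern. Thresholds and data are hypothetical.
from statistics import mean, stdev

history = {  # past transaction amounts per card
    "card-1001": [42.10, 55.00, 38.75, 61.20, 47.90, 52.35],
    "card-1002": [12.00, 9.50, 14.25, 11.80, 10.40, 13.10],
}

def is_outlier(card_id, amount, threshold=3.0):
    """Return True if the amount is more than `threshold` standard
    deviations away from the card's historical mean."""
    past = history[card_id]
    mu, sigma = mean(past), stdev(past)
    if sigma == 0:
        return False
    return abs(amount - mu) / sigma > threshold

# Incoming transactions are scored as they arrive; flagged ones go to an
# analyst (e.g. via a Cognos report) rather than being blocked blindly.
for card, amount in [("card-1001", 49.99), ("card-1002", 480.00)]:
    print(card, amount, "FLAG" if is_outlier(card, amount) else "ok")
```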

DB2 supports standard data mining model algorithms such as clustering, associations, classification and prediction; additional algorithms may be imported in industry-standard Predictive Model Markup Language (PMML) format from other PMML-compliant data mining applications including SAS and SPSS. This capability enables high-volume, high-speed, parallelized scoring of data in DB2 using third-party models.
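For readers unfamiliar with the format, the sketch below shows roughly what a PMML document looks like and how it can be inspected with standard XML tooling. The tiny model is invented for illustration; real PMML files are exported by mining tools such as SAS or SPSS, not written by hand.

```python
# Minimal sketch of the PMML interchange format: an XML document describing
# a trained model that one tool can export and another system can import for
# scoring. The model below is hypothetical and deliberately tiny.
import xml.etree.ElementTree as ET

PMML_DOC = """
<PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1">
  <DataDictionary numberOfFields="2">
    <DataField name="amount" optype="continuous" dataType="double"/>
    <DataField name="risk" optype="categorical" dataType="string"/>
  </DataDictionary>
  <TreeModel modelName="card_risk" functionName="classification">
    <!-- decision-tree nodes would appear here -->
  </TreeModel>
</PMML>
"""

ns = {"p": "http://www.dmg.org/PMML-4_1"}
root = ET.fromstring(PMML_DOC)
model = root.find("p:TreeModel", ns)
print("model:", model.get("modelName"), "task:", model.get("functionName"))
for field in root.findall("p:DataDictionary/p:DataField", ns):
    print("field:", field.get("name"), field.get("dataType"))
```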

Cubing Services provides decision makers with a multidimensional view of data stored in a relational database. It supports OLAP capabilities within the data warehouse and simplifies queries that run against large and complex data stores. The multidimensional view of data leads to easier discovery and understanding of the relationships in your data for better business decisions. In addition, Cubing Services cubes are first-class data providers to the Cognos Business Intelligence platform for incorporating predictive and analytic insights into Cognos reports.

Unstructured Data – up to 80 percent of the data within an organization is unstructured. DB2 can extract information from your unstructured business text and correlate it with your structured data to increase business insight into customer issues. DB2 also allows you to process unstructured data and create multidimensional reports using OLAP capabilities.  In addition, unstructured data can be integrated into data mining models to broaden predictive capabilities.

DB2 Spatial Extender allows you to store, manage, and analyze spatial data in DB2, which along with business data in a data warehouse helps with fraud analysis.

Temporal Data helps you implement time-based queries quickly and easily. Historical trend analysis and point-in-time queries can be constructed by using the history tables and SQL period specifications that are part of the database engine.
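As a sketch of what such a point-in-time query looks like, the following assumes a hypothetical ACCOUNTS table that was created with system versioning (PERIOD SYSTEM_TIME plus an associated history table) and uses the ibm_db Python driver; names and the connection string are placeholders.

```python
# Sketch of a point-in-time query against a system-period temporal table.
# Assumes an ACCOUNTS table defined with system versioning; names and the
# connection string are hypothetical.
import ibm_db

conn = ibm_db.connect("DATABASE=testdb;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

# FOR SYSTEM_TIME AS OF returns each row as it existed at that moment;
# DB2 transparently pulls older versions from the history table.
sql = """
    SELECT account_id, balance
    FROM   accounts
    FOR SYSTEM_TIME AS OF TIMESTAMP('2014-01-31-23.59.59')
    WHERE  account_id = ?
"""
stmt = ibm_db.prepare(conn, sql)
ibm_db.execute(stmt, ("ACCT-42",))
print(ibm_db.fetch_assoc(stmt))
ibm_db.close(conn)
```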

Performance Optimization

Database Partitioning Feature (DPF) for row-based data stores – As data volume increases over time, the data might become skewed and fragmented, resulting in decreased performance. DPF distributes table data across multiple database partitions in a shared-nothing manner in which each database partition “owns” a subset of the data. It enables massive parallel processing by transparently splitting the database across multiple partitions and using the power of multiple servers to satisfy requests for large amounts of information. This architecture allows databases to grow very large to support true enterprise data warehouses.
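The following conceptual Python sketch shows the shared-nothing idea: a hash of the distribution key decides which partition owns a row, so work can be spread across partitions in parallel. It illustrates the principle only and is not DB2's actual hashing or partition-map algorithm.

```python
# Conceptual sketch of shared-nothing partitioning: hash the distribution
# key to pick the database partition that "owns" a row. Illustration only,
# not DB2's real hashing or partition map.
import hashlib

NUM_PARTITIONS = 4  # hypothetical number of database partitions

def partition_for(distribution_key: str) -> int:
    digest = hashlib.md5(distribution_key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

rows = [("CUST-0001", 120.50), ("CUST-0002", 89.99), ("CUST-0003", 310.00)]
for key, amount in rows:
    print(f"{key} -> partition {partition_for(key)}")
# A query that touches many customers can then run on all partitions at
# once, each scanning only the subset of data it owns.
```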

Data Movement and Transformation

Continuous Data Ingest (CDI) allows business-critical data to be continually loaded into the warehouse without the latency associated with periodic batch loading. It allows the warehouse to reflect the most up-to-date information and can help you make timely and accurate decisions. Consider, for example, receiving a lost credit card report, a potential credit card fraud alert, from the call center. Such an event is ingested into the warehouse immediately rather than waiting until a batch load occurs at predefined intervals. Using such contextual information along with account transaction data can help in real-time fraud detection.

In fact, after experiencing just how beneficial the CDI feature is, some of our clients have renamed their Extract, Transform, and Load (ETL) processes to Extract, Transform, and Ingest (ETI).
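A rough sketch of the continuous-ingest idea is shown below. In DB2 the heavy lifting is done by the INGEST utility; this Python fragment, with a hypothetical table and event source, only illustrates writing events into the warehouse as they arrive instead of holding them for a scheduled batch load.

```python
# Conceptual sketch of continuous ingest: events (e.g. lost-card reports
# from the call center) are written to the warehouse as they arrive instead
# of waiting for a nightly batch load. In DB2 this is the job of the INGEST
# utility; names and the connection string here are hypothetical.
import ibm_db

conn = ibm_db.connect("DATABASE=whsdb;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")
insert = ibm_db.prepare(
    conn, "INSERT INTO card_events (card_id, event_type, event_ts) "
          "VALUES (?, ?, CURRENT TIMESTAMP)")

# In practice this runs continuously, pulling events from a queue or files.
events = [("card-1002", "LOST_CARD_REPORTED")]
for card_id, event_type in events:
    ibm_db.execute(insert, (card_id, event_type))  # visible to queries immediately

ibm_db.close(conn)
```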

All these features are available in DB2 advanced editions and IBM PureData System for Operational Analytics to help you deliver near-real-time insights.

Now, are you meeting your service level agreements for performance while trying to prevent fraud in real time? Not sure? Why don’t you give DB2 with BLU Acceleration or other IBM Data Management solutions a try? They may help you achieve your business objectives.

Yes, fraud detection is not so elementary. But with the right clues, I mean with the right software and tools, it could be made elementary.

Follow Radha on Twitter @rgowda

Read the IBM Data Management for Banking whitepaper for more information on how IBM can help banks gain a competitive edge!

Achieving High Availability with PureData System for Transactions


Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

A short time ago, I wrote about improving IT productivity with IBM PureData System for Transactions and I mentioned a couple of new white papers and solution briefs on that topic.  Today, I’d like to highlight another one of these new papers: Achieving high availability with PureData System for Transactions.

I’ve recently been meeting with a lot of different companies and organizations to talk about DB2 pureScale and PureData System for Transactions, and while there’s a lot of interest and discussion around performance and scalability, the primary reason that I’m usually there is to talk about high availability and how they can achieve higher levels than what they’re seeing today. One thing I’m finding is that there are a lot of different interpretations of what high availability means (and I’m not going to argue here over what the correct definition is). To some, it’s simply a matter of what happens when some sort of localized unplanned outage occurs, like a failure of their production server or a component of that server. How can downtime be minimized in that case? Others extend this discussion to include planned outages, such as maintenance operations or adding more capacity into the system. And others will include disaster recovery under the high availability umbrella as well (while many keep them as distinctly separate topics — but that’s just semantics). It’s not enough that they’re protected in the event of some sort of hardware component failure for their production system; what would happen if the entire data center were to experience an outage?

Finally (and I don’t mean to imply that this is an exhaustive list — when it comes to keeping the business available and running, there may be other things that come into the equation as well), availability could also include a discussion of performance. There is typically an expectation of performance and response time associated with transactions, especially those that are being executed on behalf of customers, users, and business processes. If a customer clicks on a button on a website and it doesn’t come back quickly, it may not be distinguishable from an outage, and the customer may leave that site, choosing to go to a competitor instead.

It should be pointed out that not every database requires the highest levels of availability. It might not be a big deal to an organization if a particular departmental database is offline for 20 minutes, or an hour, or even the entire day. But there are certainly some business-critical, “tier 1” databases that do require the highest availability possible. Therefore, it is important to understand the availability requirements that your organization has. But I’m likely already preaching to the choir here, and you’re reading this because you do have a need and you understand the ramifications to your business if these needs aren’t met. With respect to the companies I’ve been meeting with, just hearing about what kinds of systems they depend on, from both an internal and external perspective, and what it means to them if there’s an interruption in service, has been fascinating. Of course, I’m sympathetic to their plight, but as a consumer and a user I still have very high expectations around service. I get pretty mad when I can’t make an online trade, check the status of my travel reward accounts, or even order a pizza online, especially when I know what those companies could be doing to provide better availability to their users. :-)

Those things I mentioned above — high availability, disaster recovery, and performance (through autonomics) — are all discussed as part of the paper in the context of PureData System for Transactions. PureData System for Transactions is a reliable and resilient expert integrated system designed for high availability, high throughput online transaction processing (OLTP). It has built-in redundancies to continue operating in the event of a component failure, disaster recovery capabilities to handle complete system unavailability, and autonomic features to dynamically manage utilization and performance of the system. Redundancies include power, compute nodes, storage, and networking (including the switches and adapters). In the case of a component failure, a redundant component keeps the system available. And if there is some sort of data center outage (planned or unplanned), a standby system at another site can take over for the downed system. This can be accomplished via DB2’s HADR feature (remember that DB2 pureScale is the database environment within the system) or through replication technology such as Q Replication or Change Data Capture (CDC), part of IBM InfoSphere Data Replication (IIDR).

Just a reminder that the IDUG North America 2014 conference will be taking place in Phoenix next month from May 12-16. Being in a city that just got snowed on this morning, I’m very much looking forward to some hot weather for a change. Various DB2, pureScale, and PureData topics are on the agenda. And since I’m not above giving myself a shameless plug, come by and see me at my session: A DB2 DBA’s Guide to pureScale (session G05). Click here for more details on the conference. Also, check out Melanie Stopfer’s article on IDUG.  Hope to see you there!

Fraud detection? Not so elementary, my dear.


Radha Gowda, Product Marketing Manager, DB2 and related offerings

Did you know that fraud and financial crime have been estimated at over $3.5 trillion annually1? That identity theft alone cost Americans over $24 billion, $10 billion more than all other property crimes2? And that 70% of all companies have experienced some type of fraud3?

While the monetary loss due to fraud is significant, the loss of reputation and trust can be even more devastating. In fact, according to a 2011 study by the Ponemon Institute, organizations lose an average of $332 million in brand value in the year following a data breach. Unfortunately, fraud continues to accelerate due to advances in technology, organizational silos, lower risks of getting caught, weak penalties, and economic conditions. In this era of big data, fraud detection needs to go beyond traditional data sources – not just transaction and application data, but also machine, social, and geospatial data – for greater correlation and actionable insights. The only way you can sift through vast amounts of structured and unstructured data and keep up with the evolving complexity of fraud is through smarter application of analytics to identify patterns, construct fraud models, and conduct real-time detection of fraudulent activity.

The IBM Watson Foundation portfolio for end-to-end big data and analytics needs


While IBM has an impressive array of offerings addressing all your big data and analytical needs, our focus here is on how DB2 solutions can help you develop and test fraud models, score customers for fraud risk, and conduct rapid, near-real-time analytics to detect potential fraud. You have the flexibility to choose the type of solution that best fits your needs – select software solutions to take advantage of your existing infrastructure, or choose expert-integrated, appliance-based solutions for a simplified experience and fast time to value.

Highly available and scalable operational systems for reliable transaction data

DB2 for Linux, UNIX and Windows software is optimized to deliver industry-leading performance across multiple workloads – transactional, analytic and operational analytic – while lowering administration, storage, development, and server costs. DB2 pureScale, with its cluster-based, shared-disk architecture, provides application-transparent scalability beyond 100 nodes, helps achieve failover between two nodes in seconds, and offers business continuity with built-in disaster recovery over distances of a thousand kilometers.

IBM PureData System for Transactions, powered by DB2, is an expert integrated system of server, storage, network, and tools, selected and tuned specifically for the demands of high-availability, high-throughput transactional processing—so you do not have to research, purchase, install, configure and tune the different pieces to work together. With its pre-configured topology and database patterns, you can set up high-availability cluster instances and database nodes to meet your specific needs and deploy the same day, rather than spending weeks or months. As your business grows, you can add new databases in minutes and manage the whole system using its intuitive system management console.

Analytics for fraud detection

DB2 Warehouse Analytics – DB2 advanced editions offer capabilities such as online analytical processing (OLAP), continuous data ingest, data mining, and text analytics that are well-suited for real-time enterprise analytics and can help you extract structured information out of previously untapped business text. Its business value in enabling fraud detection is immense.

IBM PureData System for Operational Analytics, powered by DB2, helps you deliver near-real-time insights with continuous data ingest and immediate data analysis. It is reliable, scalable, and optimized to handle thousands of concurrent operational queries with outstanding performance. You can apply fraud models to identify suspicious transactions while they are in progress, not hours later. This applies across any industry segment, including financial services, health care, insurance, retail, manufacturing, and government services. PureData System for Operational Analytics helps not just with real-time fraud detection, but also with cross-sell and up-sell offers and services, by identifying customer preferences, anticipating their behavior, and predicting the optimum offer or service in real time.

DB2 with BLU Acceleration, available in advanced DB2 editions, uses advanced in-memory columnar technologies to help you analyze data and generate new insights in seconds instead of days. It can provide performance improvements ranging from 10x to 25x and beyond, with some queries achieving 1,000 times improvement4, for analytical queries with minimal tuning. DB2 with BLU Acceleration is extremely simple to deploy and provides good out-of-the-box performance for analytic workloads. From a DBA’s perspective, you simply create a table, load, and go; there are no secondary objects, such as indexes or MQTs, that need to be created to improve query performance.
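As a sketch of that “create, load and go” workflow, the following assumes DB2 10.5, the ibm_db Python driver, and a hypothetical SALES table; the column-organized table is created with ORGANIZE BY COLUMN and then queried directly, with no indexes or MQTs.

```python
# Sketch of BLU Acceleration's "create, load and go" workflow: a
# column-organized table is created and queried with no secondary tuning
# objects. Assumes DB2 10.5 with ibm_db; table and connection details are
# hypothetical.
import ibm_db

conn = ibm_db.connect("DATABASE=testdb;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

ibm_db.exec_immediate(conn, """
    CREATE TABLE sales (
        sale_date  DATE,
        store_id   INTEGER,
        amount     DECIMAL(12,2)
    ) ORGANIZE BY COLUMN
""")

# Load data (LOAD/INGEST in practice), then run analytic queries directly;
# no indexes or MQTs are required to get good performance.
stmt = ibm_db.exec_immediate(conn, """
    SELECT store_id, SUM(amount) AS revenue
    FROM   sales
    GROUP BY store_id
    ORDER BY revenue DESC
""")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["STORE_ID"], row["REVENUE"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```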

DB2 with BLU Acceleration can handle terabytes of data, letting you conduct customer scoring across your entire customer data set and develop and test fraud models that explore a full range of variables based on all available data. Sometimes creating a fraud model may involve looking at 100s of terabytes of data, where IBM® PureData™ System for Analytics would fare better. Once a fraud model is created, you can use DB2 with BLU Acceleration to apply the fraud model to every transaction that comes in, for speed-of-thought insight.

IBM Cognos® BI – DB2 advanced editions come with 5 user licenses for Cognos BI, which enables users to access and analyze the information they need to make the decisions that lead to better business outcomes. Cognos BI with Dynamic Cubes, an in-memory accelerator for dimensional analysis, enables high-speed interactive analysis and reporting over terabytes of data. DB2 with BLU Acceleration integrated with Cognos BI with Dynamic Cubes offers fast-on-fast performance for all your BI needs.

With the array of critical challenges facing financial institutions today, the smarter ones are those that successfully protect their core asset – data. IBM data management solutions help you integrate information and generate new insights to detect and mitigate fraud. We invite you to explore and experience DB2 and the rest of the Watson Foundation offerings from IBM.

Stay tuned for the second part of this blog that will explore the product features in detail.

1 ACFE 2012 Report to the Nations
2 BJS 2013 report on identity theft
3 Kroll 2013/2014 Global Fraud Report

4 Based on internal IBM tests of analytic workloads comparing queries accessing row-based tables on DB2 10.1 vs. columnar tables on DB2 10.5. Results not typical. Individual results will vary depending on individual workloads, configurations and conditions, including size and content of the table, and number of elements being queried from a given table.

Follow Radha on Twitter @rgowda

 
