Blog Archives

The Good Ghosts of Data Virtualization

I’m a datum, tra-la, tra-la, just humming along, happier than most of my colleagues now that Data Virtualization has come into my life. I’m what they call Master data, or rather the source of record, which has been quite a challenge, since I used to get cloned over and over, morphed, and turned upside down. That took lots of time and seemed to be necessary because they couldn’t quite get to ME, the original: unvarnished, and clean as a whistle. My colleagues still deal with this and are copied to staging databases all the time, leaving many places where versions reside. There is a constant churn of synchronization and updates across all those databases. Just imagine the time it takes to build and maintain all those comings and goings! Often it isn’t even clear what the original, real value is or where it all began. Of course, this is done through such a gallimaufry of tools and custom coding that the information they convey is old by the time it is used. And everyone knows: the greater the gallimaufry rating, the higher the tech debt accumulation. Bad stuff.

With this new Data Virtualization approach, I get to stay right where I am, and whenever called upon, I send a fresh virtual ME instantly. I say “virtual” because there’s no copy made of me, only a ghost of me is transmitted, aligned with other data, usually to a browser or to feed some analytics algorithm. Here’s where my psychiatrist has to get involved, because I have this existential dilemma. Does the ghost that’s passed forward actually exist? Does a datum exist if it’s passed virtually? Well, at least it can’t be stolen like copies of data can. I stay up nights worrying about my colleagues who have to be physically copied, or moved altogether, sometimes getting a little beat up, to some cloud in order to be used by a SaaS application. I never have to move because my ghost is passed directly to the cloud when it’s needed, never taking residence there. And one of the really cool things is that if a user of the browser decides that my value needs to be updated, they change it in the browser and send the ghost back to me with the new value, assuming they have permissions to do that.
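For readers who prefer a concrete picture, here is a minimal sketch of the idea in plain Python (not Enterprise Enabler’s actual API): reads are always delegated to the system of record so no copy ever exists, and a write-back is applied only when the caller has permission. All class, method and key names below are invented for the illustration.

```python
class SourceOfRecord:
    """Stands in for the real system of record."""
    def __init__(self, values, writers):
        self.values = values                  # key -> current, original value
        self.writers = writers                # users allowed to write back

    def fetch(self, key):
        return self.values[key]               # always the live value

    def update(self, key, value, user):
        if user not in self.writers:
            raise PermissionError(f"{user} may not update {key}")
        self.values[key] = value


class VirtualRecord:
    """A 'ghost': nothing is copied, every read goes straight to the source."""
    def __init__(self, source, key):
        self.source, self.key = source, key

    def read(self):
        return self.source.fetch(self.key)            # live value, on demand

    def write_back(self, value, user):
        self.source.update(self.key, value, user)     # only applied if permitted


source = SourceOfRecord({"customer_42_name": "Acme BV"}, writers={"alice"})
ghost = VirtualRecord(source, "customer_42_name")
print(ghost.read())                                   # no copy was made
ghost.write_back("Acme B.V.", user="alice")           # permissioned update at the source
```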

Sometimes when I’m needed virtually, there are too many calls for ghosts, and the phantom of the opera sings one note too high and crashes my host software system. Fortunately, the masters at Stone Bond, with Enterprise Enabler®, not only have the speediest development environment on the planet, they also have a way of caching my ghost along with others in memory since I really don’t change that often. In the background, the cache gathers fresh ghosts whenever needed. When my call comes in, my ghost flies from the cache of ghosts and finishes the journey, combined with some other ghosts coming live from other systems. The phantom just hums along and the host system lives happily ever after.
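The caching trick can be sketched in a few lines as well. The sketch below is only an illustration of the pattern, not Stone Bond’s implementation: slowly changing values are served from an in-memory cache, while a background thread quietly refreshes the entries from the live source.

```python
import threading
import time


class GhostCache:
    """Serve slowly changing values from memory; refresh them in the background."""

    def __init__(self, fetch_live, refresh_seconds=300):
        self.fetch_live = fetch_live          # function: key -> live value from the source
        self.refresh_seconds = refresh_seconds
        self.cache = {}
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key not in self.cache:         # first request fills the cache
                self.cache[key] = self.fetch_live(key)
            return self.cache[key]

    def refresh_forever(self):
        while True:                           # background thread: gather fresh ghosts
            time.sleep(self.refresh_seconds)
            with self.lock:
                for key in list(self.cache):
                    self.cache[key] = self.fetch_live(key)


cache = GhostCache(fetch_live=lambda key: f"live value of {key}", refresh_seconds=300)
threading.Thread(target=cache.refresh_forever, daemon=True).start()
print(cache.get("master_record_7"))           # served from memory on later calls
```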

Would you like more information? Ask for the whitepaper “Creating an Agile Data Integration Platform using Data Virtualization”: www.quant-ict.nl, glenda@quant-ict.nl, tel. +31 880882500

Source: Stone Bond Technologies, Pamela Szabo

Quant ICT Group wishes you happy holidays and a prosperous 2017!

Databases Need Continuous Monitoring & Proper Data Stewardship

While perimeter, cloud and mobile security tend to grab the headlines, in reality it’s the database repositories and the private financial information stored in databases that are the actual targets of most breaches. Comprehensive database security is commonly an overlooked area within financial services organizations, yet one of the most critical.

Databases pose a unique security challenge for banks and financial institutions of all sizes. The database infrastructure at financial services companies is usually quite extensive, with many databases remaining unknown, unmonitored, simply left unmanaged or, worse, unsecured. It is common for financial services organizations to have limited visibility into their database infrastructure, which provides an open avenue for cyberattackers. Once inside the database infrastructure, an attacker can easily operate strategically and remain undetected, stealing records, compromising credentials and installing malware over many months.

In fact, according to KPMG’s 2016 Banking Outlook Survey published earlier this year, approximately 47% of banking EVPs and managing directors, as well as 72% of SVPs, reported they do not have insight into whether their institution’s security has been compromised by a cyberattack over the past two years. These numbers are alarming and point to a critical need for securing and monitoring databases. Any attack that reaches the core networks can put the financial institution’s databases and private information at extreme risk.

With breaches increasing at an alarming rate, it’s important for financial organizations to follow thorough data stewardship practices and continuously monitor all of their databases – from their initial deployment, throughout their lifecycle and into their retirement when the database is decommissioned. Monitoring needs to be detailed down to the table level to completely understand the database security profile, data ownership, purpose of the data and any changes to the data stores. Without an in-depth understanding of every database and detailed knowledge of the private data residing in databases throughout the network, it is impossible to keep data secure and prevent a serious breach. IT security personnel need to put the proper tools, policies and procedures in place.

The process starts with a comprehensive assessment of the database infrastructure. It is recommended to use non-intrusive monitoring tools to identify every database on the network and every application or user that is accessing it. Further, each database’s business purpose needs to be documented, the nature and sensitivity of the data it stores determined, and proper retention policies established. It is also important to know what will be done with each database when its retention time has expired. Zombie databases that should have been decommissioned long ago are an open opportunity for attack: the database may not be properly patched, credentials may not have been updated, and no one is actively monitoring its activity.

Once policies are established and all databases have been verified, financial organizations should continuously monitor these databases throughout their lifecycle to ensure policies and procedures are updated and effectively enforced. The key to stopping serious data breaches is paying close attention to who is using or accessing a database and how it is being used, and identifying key changes in usage patterns. An unknown user or an uncommon usage pattern may be a sign that a malicious attacker is on the network.
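As a rough illustration of what such monitoring can look like, the sketch below compares each database’s daily activity against a simple baseline and flags unknown users or unusually high query volumes. The database names, thresholds and data shapes are made up for the example; real tools work from continuously collected telemetry.

```python
baseline = {
    "core_banking_db": {"known_users": {"app_core", "dba_team"}, "avg_daily_queries": 120_000},
    "loans_db":        {"known_users": {"app_loans"},            "avg_daily_queries": 8_000},
}

todays_activity = {
    "core_banking_db": {"users": {"app_core", "tmp_admin"}, "queries": 410_000},
    "loans_db":        {"users": {"app_loans"},             "queries": 7_600},
}


def flag_anomalies(baseline, activity, volume_factor=3):
    """Return alerts for unknown users or query volumes far above the baseline."""
    alerts = []
    for db, profile in baseline.items():
        seen = activity.get(db, {"users": set(), "queries": 0})
        unknown_users = seen["users"] - profile["known_users"]
        if unknown_users:
            alerts.append(f"{db}: unknown users {sorted(unknown_users)}")
        if seen["queries"] > volume_factor * profile["avg_daily_queries"]:
            alerts.append(f"{db}: {seen['queries']} queries, far above the daily baseline")
    return alerts


for alert in flag_anomalies(baseline, todays_activity):
    print("ALERT:", alert)
```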

Zombie databases are particularly vulnerable to insider threats, advanced persistent threats and compromised credentials. Attackers can use them as an open door to get access to other databases and potentially private financial information across the network.

In a similar fashion, rogue databases can present a large and very high-risk attack surface. These one-off databases may have been commissioned during the development phase of a new application and connected to the network without the IT team being aware of their existence. While developers may think they are doing something innocuous, if IT does not take the database through the proper lifecycle steps, the data won’t be properly protected. Private data on these rogue databases resides outside the scope of the security team, leaving the organization highly vulnerable. Without intelligent monitoring to identify when a new database becomes active on the network and to check it against the current data asset inventory, it is not possible to properly secure its data.
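Here is a minimal sketch of that reconciliation step, with invented database names and inventory fields: databases discovered on the network but missing from the inventory are flagged as potential rogues, and registered databases that are past their retention date but still running are flagged as potential zombies.

```python
from datetime import date

# Databases found by a (hypothetical) non-intrusive network scan.
discovered = {"core_banking_db", "loans_db", "dev_scratch_db", "mortgage_archive_db"}

# The current data asset inventory, with owners and retention deadlines.
inventory = {
    "core_banking_db":     {"owner": "payments team", "retire_by": date(2030, 1, 1)},
    "loans_db":            {"owner": "lending team",  "retire_by": date(2028, 6, 30)},
    "mortgage_archive_db": {"owner": "risk team",     "retire_by": date(2015, 12, 31)},
}

rogue = discovered - inventory.keys()
zombies = {name for name in discovered & inventory.keys()
           if inventory[name]["retire_by"] < date.today()}

print("Rogue databases (on the network, not in the inventory):", sorted(rogue))
print("Zombie databases (past retention, still running):", sorted(zombies))
```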

With so much attention focused on securing the perimeter, mobile devices and the cloud, financial services IT teams risk ignoring the security of their organizations’ crown jewels – all of the databases residing on their network. To prevent a serious data breach, every database needs to be identified, inventoried, continuously monitored and retired when no longer in use. For the protection of sensitive information, it is critical that IT teams know who is accessing each database and what it is used for, and that data is protected for the lifetime of the database. Without a comprehensive database monitoring model in place, financial institutions run the risk of a serious breach of information and of becoming front-page news.

For more information on database monitoring, contact Quant ICT Group: www.quant-ict.nl, glenda@quant-ict.nl, tel. +31 880882500

Source: Steve Hunt’s blog, Credit Union Times

Do you have the guts to be a hero?


Take the Agile Integration Plunge

Come on now. In this day and age, you business leaders are still beholden to your IT organization. You are the cleverest business person you know. You have successfully negotiated the biggest acquisition in the history of your company. Besides that, you are on the leading edge with all the latest innovations in video, phone, and personal computer and tablet technology. You even rigged up a sensor to notify you when the bird feeder in your yard is empty. How can you not be frustrated that you just can’t seem to feel as confident about your IT infrastructure?

There are a number of reasons why some corporate IT tools and infrastructure have lagged generations behind the advances of consumer technologies, but that’s for another blog. The important message here is that finally, the next generation integration platforms have matured and are ready to turn the ship. Change has been incredibly difficult, and large companies, in particular, have been unable to respond quickly to opportunities because IT could not keep up.  We have reached a point where those that adopt agile integration software will have a clear competitive advantage. We are seeing that transformation take place, and not at the speed of classic IT. At the speed of change.

At a recent Gartner conference, I heard the keynote speaker announce that, with the new imperatives of agility and support for the “Nexus” of data integration demands, the Big Players will be DOA (“dead on arrival”). You must take charge and get on board with the next generation, even though you are faced with resistance from people whose IT knowledge may intimidate you. How can you tell if it’s time to take up agility? Here are some telltale signs:


  1. Not everyone is working off of the same numbers
  2. Long wait times to get access to new data that you need
  3. Manpower costs for building integration are exorbitant
  4. More than 40% of the costs of new projects are for integration and data access
  5. Data you get is not up-to-the-minute
  6. You have business processes that are highly manpower-intensive
  7. Your partners and customers are not getting information as fast and in the forms they would like
  8. You may be moving forward with Cloud, Big Data, others, but the rest of the IT team can’t keep up with the demands of everyday project work

Another telltale sign is that the data warehouse is the center of the universe, but is less than agile. A data warehouse is good for historic data, not for real-time or near-live data, and it adds an extra staging step before the data can be used. How can you get the ship turning? Institute just a couple of new guidelines. All new integration must:

  1. Leverage Agile Integration Software
  2. Use Virtual Data Federation wherever data needs to be combined across more than one data source
  3. Use Data Virtualization for all “on-demand” integration (proactively asking for data when you need it, so you get live data from the sources rather than stale data)

You will be met with resistance from all sides, but you will gain strong supporters quickly after the first jack-rabbit projects come in ahead of time and below budget. There are plenty of causes of resistance:

  • The Big Vendors are always a safe choice
  • Fear of change; stubbornness; laziness about learning new things
  • There is a relationship between how much time it takes for consultants to do something and the amount of money they earn
  • Anything new is up for more scrutiny than going with the tried and slow

And you can certainly augment the list in the context of your business. The promises of Agile Integration Software like Enterprise Enabler are real and are being realized in many companies. The technology is definitely Enterprise-Ready, and not to be relegated to small projects with small potential benefits. Do yourself and your company a favor. Take the plunge. Stare down fear with the guts that got you here in the first place.

Would you like more information? Ask for the whitepaper “Creating an Agile Data Integration Platform using Data Virtualization”: Quant ICT Group, www.quant-ict.nl, glenda@quant-ict.nl, tel. +31 880882500

Source: Pamela Szabo, Stone Bond Technologies

Database Performance Optimization With Wait Time Analysis

Database performance tuning is a complex but extremely important task. However, it can be difficult to effectively optimize databases when there are other “fires” to put out, limited resources, and an increasing number of databases to look after. But that doesn’t mean it’s impossible, especially with the right approach.

Wait Time Analysis
Consider for a moment how traditional database monitoring focuses on resource utilization metrics. To illustrate why this isn’t ideal, think about what you would evaluate in trying to shorten your commute to work. What would you measure: The number of tire rotations per second? The engine’s temperature? How much gas is in the tank? These resource statistics would be useless in the context of your goal.

All that would really matter is what has an impact on your trip’s time: detailed insight into how long you spend at each stoplight and which stretches of road have the most stop-and-go traffic. With this information, you could determine if finding a shorter route, avoiding peak hours, taking the expressway, or driving faster would improve your commute.

This is the basis for wait time analysis, an innovative approach to database optimization that focuses on time. Specifically, it allows DBAs to make tuning decisions based on the impact to application response. In turn, this enables IT to always find the root cause of the most important problem impacting end users, and identify which critical resource(s) will resolve it.

Wait Time Analysis for Database Optimization
Wait time analysis helps you understand how much time each SQL statement is spending across all its executions in a given period of time. Keep in mind, you may have a very fast SQL that runs in 100 milliseconds, but if it has to run a million times a day, it will have a big impact on application performance.
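A quick back-of-the-envelope calculation makes the point concrete, using the same numbers as above:

```python
executions_per_day = 1_000_000
seconds_per_execution = 0.100                # a "fast" 100 ms statement

total_seconds = executions_per_day * seconds_per_execution
print(f"{total_seconds:,.0f} s ≈ {total_seconds / 3600:.1f} hours of database time per day")
# -> 100,000 s ≈ 27.8 hours per day, accumulated across concurrent sessions
```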

It begins with identifying the discrete steps accumulating time. These steps, corresponding to physical I/O operations, manipulating buffers, waiting on locks, and all other minute database operations, are instrumented by most database vendors.

For Microsoft SQL Server, these are called “Wait Types.” For Oracle, SAP Sybase ASE, and IBM DB2, they are referred to as “Wait Events.” These indicate the amount of time spent in each step while sessions wait for each database resource. If they can be accurately monitored and analyzed, the exact bottlenecks and queries causing the delays can be determined.
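As a small example of where such numbers come from, the sketch below reads the instance-level wait statistics that SQL Server exposes in the sys.dm_os_wait_stats view, using the pyodbc driver. The connection string is a placeholder, and dedicated monitoring tools go much further by sampling waits per session and per query.

```python
import pyodbc

# Placeholder connection string: adjust driver, server and authentication to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)

top_waits_sql = """
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
"""

for wait_type, task_count, wait_ms in conn.cursor().execute(top_waits_sql):
    print(f"{wait_type:<40} {task_count:>12} waits {wait_ms / 1000:>12.1f} s total")
```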

This all results in response time, defined as the sum of actual processing time plus the time a session spends waiting on availability of resources such as a lock, log file, or hundreds of other wait types/events. When multiple sessions compete for the same processing resources, the wait time becomes the most significant component of the actual response time.

Identifying Bottlenecks with Wait Time Analytics

Armed with wait time information for a database, you can identify the biggest contributor to slow performance and focus on fixing it—whether it is writing to disk, a slow query, execution plan changes, or waiting for memory.

You can also use this information to identify trends and predict performance of all SQL statements being executed over time and thereby be proactive in helping applications run better. It can help identify the performance impact of changes to application code, software configuration, or hardware resources.

It is this visibility into exactly how changes impact performance that makes wait time analysis a powerful tool for resource planning. Organizations have invested in faster hardware and flash storage expecting it to solve performance problems, only to discover it didn’t: the problem was caused by bad SQL or by locking and blocking, while storage I/O actually contributed little to the slowdown.

With the right visibility into how much time an application is waiting for disk read/write operations, it’s easier to predict the performance impact to expect from a storage system with higher IOPS. This is called “performance certainty”: There is no guessing if or how much performance will be improved. You know ahead of time how switching to a different server, VM, or storage system will impact performance and what bottlenecks to work on after the move.
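The sketch below illustrates the kind of back-of-the-envelope projection this enables. It deliberately assumes that I/O wait scales inversely with IOPS, which is a rough first-order simplification rather than a vendor formula, and all numbers are invented.

```python
total_response_s = 2.0      # measured end-to-end response time of a key query
io_wait_s = 1.4             # portion of that time spent waiting on disk reads/writes
current_iops, new_iops = 5_000, 20_000

# Simplifying assumption: I/O wait shrinks in proportion to the IOPS increase.
projected_io_wait_s = io_wait_s * (current_iops / new_iops)
projected_response_s = (total_response_s - io_wait_s) + projected_io_wait_s

print(f"Projected response time: {projected_response_s:.2f} s "
      "(best case; waits on locks, CPU or memory are unaffected)")
```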

Performance optimization ranks at the top of the list for DBAs, both in terms of importance and percentage of time required. Understanding wait time analysis as an approach to optimization and tuning, especially if aided with the right analysis tools, can make a significant impact in your tuning effectiveness, in application response time, and in your career.

For more information contact Quant ICT Group, www.quant-ict.nl, glenda@quant-ict.nl, tel: +31 880882500

Source: http://www.dbta.com, Gerardo Dada

Gartner’s First Data Virtualization Market Guide

Data virtualization offers data and analytics leaders a data integration strategy to limit data silos and address new analytical and operational opportunities through flexibility in data access. This research provides insight into 16 vendors and their offerings to help in making an informed choice.

Key Findings

  • Familiar data integration patterns centered on physical data movement (bulk/batch data movement, for example) are no longer a sufficient solution for enabling a digital business.
  • Many organizations report that their existing data warehouse forms yet another data silo in the organization, which leads to a slow degradation of the data warehouse benefits of providing optimized, integrated data delivery.
  • Data virtualization offers an attractive alternative to bulk/batch data delivery by allowing flexibility and agility in data access through the creation of a single logical view of data from varied data silos (including transactional systems, RDBMSs, cloud data stores and big data stores).
  • Data virtualization offerings are maturing at a steady pace in terms of connectivity options, performance, security and near-real-time data delivery. Current offerings are being increasingly deployed by all major verticals for production-level use cases.

Receive an overview of the data virtualization marketplace and its future growth, operational and analytical use cases, and the list of top data virtualization vendors in the market, including Stone Bond Technologies.

Contact Quant ICT Group and ask for the “Market Guide for Data Virtualization”: www.quant-ict.nl, glenda@quant-ict.nl, tel. +31 880882500

The Heavy Cost of System Downtime

Human error, network failures, buggy apps and the ever-present hacker have made system outages a constant concern for far too many organizations.

IT system outages have emerged as fairly routine issues for companies today—and the resulting downtime amounts to a five-figure financial hit every day, according to recent research from CloudEndure.

The resulting “2016 Disaster Recovery Survey” report reveals that while the majority of IT professionals say they’ve set service availability goals of 99.9% (a.k.a., the industry standard “three nines” mark), far fewer say they’re capable of achieving this “most of the time.” As for the culprits? Either human error or network failures are usually to blame, not to mention app bugs, storage failures and (of course) the ever-troublesome hacker.

Disaster recovery solutions would help. However, only a minority of businesses use disaster recovery for the majority of their servers. (For the purposes of the report, survey participants defined downtime as moments when a system is either not accessible, or accessible but highly degraded and/or not operational for certain functions.) More than 140 global IT pros took part in the research.

Results:

  • Routine Stop
    57% of IT pros say their company has had at least one systems outage in the past three months, and 31% say they’ve had an outage either in the past week or month.
  • Delayed Response
    39% have set recovery time objectives (RTOs) at more than 30 minutes, and an additional 6% haven’t even established any RTOs.
  • Budget Burden
    73% say downtime costs their organization more than $10,000 a day.
  • Top Risks to System Availability
    * Human error: 22%
    * Network failure: 20%
    * App bugs: 15%
    * Storage failures: 11%
    * External threats, such as a hack: 11%
  • High Bar
    77% say their service availability goal is 99.9% (“three nines”) or better, which allows less than nine hours of downtime a year (see the quick calculation after this list).
  • Room for Improvement
    52% say their company meets availability goals “most of the time” and 38% say they do so “consistently.”
  • Under Informed, Part I
    22% say their organization doesn’t measure service availability at all.
  • Under Informed, Part II
    Just 40% notify customers about a service availability event when it occurs.
  • Over Exposed
    Only 45% say their company uses disaster recovery for more than half of their servers.
  • Weather Report
    54% say they are targeting public cloud sources for disaster recovery platforms, and 35% are looking at private cloud sources.
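The quick calculation referenced in the “High Bar” item above shows how an availability percentage translates into a yearly downtime budget:

```python
HOURS_PER_YEAR = 365 * 24   # 8,760 hours

for availability in (0.999, 0.9999):
    allowed_downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability allows "
          f"{allowed_downtime_hours:.1f} hours of downtime per year")
# 99.90% -> 8.8 hours per year; 99.99% -> 0.9 hours per year
```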

No More System Downtime? Contact Quant ICT Group, www.quant-ict.nl, glenda@quant-ict.nl  tel.:+31880882500

Source: www.cioinsight.com

Unlock the power of your data

“Support your modern BI investment with fast, agile data virtualization”

The need for more data at your fingertips is ever increasing. By making the data more agile and available faster, we ensure modern Business Intelligence (BI) tools, such as Qlik, Tableau, Power BI and Spotfire, can uncover actionable business insights by connecting all your data sources with one powerful platform.

Modern Business Intelligence
One of the biggest challenges is the ever increasing number and variety of data sources, which require federation, cleansing and processing for BI tool consumption. Historically, this has been achieved with the traditional methods of utilizing Data Warehouses, Data Lakes and Data Marts and then physically moving the data into the BI systems.

These approaches are brittle, tedious to maintain, and very costly. Worse, often the data that is being analyzed is stale and of poor quality.

With Enterprise Enabler® you are able to connect to all your sources, federate them, apply business rules and present them as one single data source to your existing BI tool, all in real time. This is done virtually, via a logical data model or data virtualization, allowing the modern BI tool to access the data just as it would a database.
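To make the pattern concrete, here is an illustration using pandas rather than Enterprise Enabler itself: two “live” sources are combined, a simple business rule is applied, and the result is presented as one logical dataset a BI tool could consume. Table and column names are invented for the example.

```python
import pandas as pd

# Imagine these two frames being read live from a CRM and an ERP system.
crm_accounts = pd.DataFrame({"account_id": [1, 2, 3],
                             "name": ["Acme", "Globex", "Initech"],
                             "segment": ["enterprise", "smb", "smb"]})
erp_invoices = pd.DataFrame({"account_id": [1, 1, 2, 3],
                             "amount_eur": [12000, 8000, 1500, 400]})

# Federate the two sources and apply a simple business rule (ignore invoices under 1,000 EUR).
revenue = (erp_invoices[erp_invoices["amount_eur"] >= 1000]
           .groupby("account_id", as_index=False)["amount_eur"].sum())
single_view = (crm_accounts.merge(revenue, on="account_id", how="left")
               .fillna({"amount_eur": 0}))

print(single_view)   # one logical dataset, ready for Qlik, Tableau, Power BI or Spotfire
```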

This approach has helped our customers to discover new insights and relationships within their data they had never recognized before, translating into millions of dollars of ROI.

Actionable BI
Now, with these insights, Enterprise Enabler allows you to take real-time action with fully secure, bi-directional write-back. These write-back tasks can be automated or performed manually using Enterprise Enabler’s Process Designer.

A single log on to your BI dashboard now becomes the facilitator of agile business decisions.

Enterprise Enabler® has been featured in Gartner’s latest Market Guide for Data Virtualization.

Would you like more information? www.quant-ict.nl, glenda@quant-ict.nl
Ask for:
* Gartner Market Guide for Data Virtualization
* Whitepaper “Creating an Agile Data Integration Platform using Data Virtualization”

 

Why Agile Data Integration

Many companies are abandoning their familiar way of working and their established view of integration software and solutions. The old generation of solutions is built on a complex stack of integration layers that becomes more complex and more expensive with every new piece of functionality. The question is shifting: organizations should no longer look only at the functionality of their current integration solution, but ask what is actually needed to deliver reliable information in a smart, fast way, without having to think in terms of restrictions.

Agile Data Integration is a flexible and scalable architecture for data integration and federation. A single data virtualization layer is created that hides and handles the complexity of accessing the underlying data sources. Analysts get the data they need and keep control over the process. The IT department can implement agile data integration services that can be used immediately by all applications.

Enterprise Enabler is the only product on the market that meets all the criteria to carry the label “True Agile Integration Software (AIS)”.

Benefits:

• Data is delivered up to five times faster
• Up to 70% savings on maintenance and development costs
• Higher productivity
• Less risk when implementing and delivering data

Quant ICT Group offers an Integrated Development Environment (IDE) that covers the full need for agile data integration. This IDE provides all run-time engines (services) for design, development, testing, deployment and management in a single environment, with the ability to connect anything to anything.

Request our free solution paper “Agile data integratie: van (big) data naar real time informatie” or contact us: Quant ICT Group, www.quant-ict.nl, glenda@quant-ict.nl, tel. 088-0882500

Does your Database need a Health Check?

A Database Health Check is a complete and comprehensive review of your databases, geared towards maximizing database performance at low cost to your business. A Health Check will not only save you money but will also identify efficiency problems across your databases. Our solutions architects will work directly with your DBA team to identify database performance issues within your systems.

The Dirty “Little” Secrets of Legacy Integration

The more I learn about integration products on the market, the more astounded I become. Fortune XXX companies buy politically “safe” products from IBM, Informatica, SAP, and Oracle and launch into tens of millions of dollars’ worth of services to implement them. Egad! They’d be better off with holes in their collective heads!
