Data Virtualization: Technology and Use Cases
Breakthrough solutions for agile data integration
17 November 2022 (9-17h CET)
Location: Live Online Event
(@YOUR DIGITAL WORKPLACE)
Presented in English
by Rick van der Lans
Price: 590 EUR
(excl. 21% VAT)
This event is history; please check out the List of Upcoming Seminars, or send us an email.
What will you learn?
This seminar provides you with answers to the following questions:
- How do you use Data Virtualization to integrate data in a more agile way? (cf. chapter 1)
- What are the advantages of agile data integration?
- How can you embed Data Virtualization in business intelligence systems?
- Can Data Virtualization be used to integrate on-premises and cloud applications?
- How do you migrate to a more agile data integration system?
- How do popular Data Virtualization products (TIBCO, Denodo, Data Virtuality, Fraxses, ...) work? (cf. chapter 2)
- How do you avoid well-known pitfalls?
- How can you solve the performance conundrum? (cf. chapter 3)
- What can we learn from real-life experiences with Data Virtualization? (cf. chapters 4, 5, and 6)
Why do we organise this seminar?
This seminar focuses on Data Virtualization and its breakthrough solutions for agile data integration. Our speaker Rick van der Lans explains the technology, discusses advantages and disadvantages, compares data virtualization products, and presents several use cases.
But why do we need a new technology? Data is increasingly becoming a crucial asset for organisations to survive in today's fast-moving business world. In addition, data becomes more valuable when enriched and/or fused with other data. Unfortunately, in most organisations enterprise data is dispersed across numerous systems, all using different technologies. Bringing all that data together is, and has always been, a major technological challenge.
In addition, more and more data is available outside the traditional enterprise systems. It is stored in big data platforms, cloud applications, spreadsheets, simple file systems, weblogs, social media systems, and so on, as well as in traditional databases. For each system that requires data from several other systems, a different integration solution is deployed. In other words, integration silos have been developed that over time have grown into a complex integration labyrinth. The disadvantages are clear:
- Inconsistent integration specifications
- Inconsistent results
- Increased time to market
- Increased development costs
- Increased maintenance costs
The bar for integration tools and technology has been raised: the integration labyrinth has to disappear. It must become easier to integrate data from multiple systems, and integration solutions should be easier to design and maintain to keep up with the fast changing business world.
All these new demands are changing the rules of the integration game: they require that integration solutions be developed in a more agile way. One of the technologies that makes this possible today is Data Virtualization.
Who should attend this seminar?
This seminar is aimed at everyone who needs both an overview of and a deep dive into data virtualization and agile data integration. It is therefore geared towards:
- BI and data warehousing consultants,
- data warehouse and database developers,
- database specialists and managers,
- technology planners,
- BI project managers,
- information analysts and system analysts.
8.45h - 9.00h
Registration and welcome of the participants (online)
1. Introduction to Data Virtualization
- What is data virtualization?
- Use cases of data virtualization: business intelligence, data science, democratization of data, master data management, distributed data
- Differences between data abstraction, data federation, and data integration
- Open versus closed data virtualization servers
- Market overview: AtScale, Data Virtuality, Denodo Platform, Intenda Fraxses, IBM Data Virtualization Manager for z/OS, Stone Bond Enterprise Enabler, TIBCO Data Virtualization, and Zetaris
How Do Data Virtualization Servers Work?
- The key building block: the virtual table
- Integrating data sources via virtual tables
- Implementing transformation rules in virtual tables
- Stacking virtual tables
- Impact analysis and lineage
- Running transactions – updating data
- Securing access to data in virtual tables
- Importing non-relational data, such as XML and JSON documents, web services, NoSQL, and Hadoop data
- The importance of an integrated business glossary and centralization of metadata specifications
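The virtual-table mechanism outlined above can be sketched in a few lines of code. This is a hypothetical illustration only: the class and variable names are invented for the example, and real data virtualization servers define virtual tables as SQL views rather than application code.

```python
# Hypothetical sketch of the virtual-table concept. All names here are
# invented for illustration; no vendor API is being shown.

class PhysicalTable:
    """A source table in some underlying system."""
    def __init__(self, data):
        self.data = data

    def rows(self):
        return list(self.data)

class VirtualTable:
    """Wraps one or more sources; nothing is materialized."""
    def __init__(self, name, sources, transform):
        self.name = name
        self.sources = sources        # physical tables or other virtual tables
        self.transform = transform    # transformation rule applied on the fly

    def rows(self):
        # Data is pulled from the sources only when the table is queried.
        combined = [row for src in self.sources for row in src.rows()]
        return [self.transform(row) for row in combined]

# Two source systems with inconsistent country codes.
crm = PhysicalTable([{"cust": "Ann", "country": "be"}])
erp = PhysicalTable([{"cust": "Bob", "country": "NL"}])

# A virtual table that integrates both sources and normalizes the codes.
customers = VirtualTable(
    "customers", [crm, erp],
    lambda r: {**r, "country": r["country"].upper()})

# Stacking: a second virtual table defined on top of the first one.
labeled = VirtualTable(
    "labeled_customers", [customers],
    lambda r: {**r, "label": f"{r['cust']} ({r['country']})"})

print([r["label"] for r in labeled.rows()])
```

Because `labeled` is defined on `customers` rather than directly on the source systems, a change to the normalization rule in `customers` propagates automatically to every table stacked on top of it; this dependency chain is also what makes impact analysis and lineage tractable.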
Performance-Improving Features
- Caching of a virtual table to improve query performance, create consistent report results, or minimize interference on source systems
- Different styles of refreshing caches: full, incremental, live, online and offline refreshing
- Different query optimization techniques, including query substitution, pushdown, query expansion, ship joins, sort-merge joins, statistical data, and SQL override
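The full versus incremental refresh styles listed above can be illustrated with a small, self-contained sketch. This is not any vendor's API; row version numbers stand in for whatever change-detection mechanism a real data virtualization server would use.

```python
# Hypothetical sketch of two cache-refresh styles for a virtual table:
# a full refresh rebuilds the entire cache, while an incremental refresh
# upserts only the rows changed since the last refresh. All names are
# invented for illustration.

class CachedVirtualTable:
    def __init__(self, source):
        self.source = source        # list of {"id", "version", ...} rows
        self.cache = {}             # keyed copy served to queries
        self.last_version = -1

    def full_refresh(self):
        # Replace the whole cache: consistent, but touches every source row.
        self.cache = {r["id"]: r for r in self.source}
        self.last_version = max((r["version"] for r in self.source), default=-1)

    def incremental_refresh(self):
        # Fetch and upsert only the delta since the last refresh.
        changed = [r for r in self.source if r["version"] > self.last_version]
        for r in changed:
            self.cache[r["id"]] = r
        if changed:
            self.last_version = max(r["version"] for r in changed)

source = [{"id": 1, "version": 0, "name": "Ann"},
          {"id": 2, "version": 0, "name": "Bob"}]
vt = CachedVirtualTable(source)
vt.full_refresh()                   # cache now holds both rows

source.append({"id": 3, "version": 1, "name": "Cem"})    # new row
source[0] = {"id": 1, "version": 1, "name": "Anna"}      # updated row
vt.incremental_refresh()            # touches only the two changed rows
```

Queries are then answered from the cache instead of the source system, which is what minimizes interference on the source; the trade-off between the two refresh styles is consistency and completeness versus load on the source.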
Use Case 1: The Logical Data Warehouse Architecture
- The limitations of the classic data warehouse architecture
- On-demand versus scheduled integration and transformation
- Making a BI system more agile with data virtualization
- The advantages of virtual data marts
- Strategies for adopting data virtualization
- Application areas of data virtualization
- The need for powerful analytical database servers
- Migrating to a data virtualization-based BI system
Use Case 2: From the Physical Data Lake to the Logical Data Lake
- Practical limitations of developing one physical data lake
- Shortening the data preparation phase of data science with data virtualization
- Sharing metadata specifications between data scientists
- Implementing analytical models inside a data virtualization server
Use Case 3: Democratizing Enterprise Data
- Increasing the business value of data assets by making all the data available to a larger group of users within the organisation
- The business value of consistent data integration
- Using lean data integration to make data available for analytics and reporting faster
- One consistent data view for the entire organisation
- How the business glossary and search features help business users
- The coming of the data marketplace
The Future of Data Virtualization
- Data virtualization as driving force for data integration
- Potential new product features
Questions, summary and conclusions
End of this one-day seminar
Rick van der Lans is a highly respected independent analyst, consultant, author, and internationally acclaimed lecturer specializing in data architectures, data warehousing, business intelligence, big data, and database technology. In 2018 he was selected as the sixth most influential BI analyst worldwide by onalytica.com.
He has presented countless seminars, webinars, and keynotes at industry-leading conferences. For many years, he has served as chairman of the annual Data Warehousing and Business Intelligence Summit in The Netherlands.
Rick helps clients worldwide design their data warehouse, big data, and business intelligence architectures and solutions, and assists them with selecting the right products. He has been influential in introducing the new logical data warehouse architecture worldwide, which helps organisations develop more agile business intelligence systems.
Over the years, Rick has written hundreds of articles and blogs for newspapers and websites and has authored many educational and popular white papers for a long list of vendors. He was the author of the first available book on SQL, entitled Introduction to SQL, which has been translated into several languages and has sold more than 100,000 copies. His more recently published books include Data Virtualization for Business Intelligence Systems and Data Virtualization: Selected Writings.
He presents seminars, keynotes, and in-house sessions on data architectures, big data and analytics, data virtualization, the logical data warehouse, data warehousing and business intelligence.
Questions about this? Interested but you can't attend? Send us an email!