Alex Woodie, Datanami Managing Editor, and Doug Black, EnterpriseTech Managing Editor, give a brief overview of the conference, its goals, and the resources attendees have for gaining additional insight into the complex and evolving world in which enterprises increasingly leverage High Performance Computing (HPC) to solve the scaling challenges of the big data era.
Shklyar’s talk will take a close look at HPC environments across industries to identify key differences as well as areas where all advanced-scale infrastructures stand on common ground. From there, Shklyar will zero in on the common denominators and how to approach issues others have already solved, from software to storage.
Identifying Needs, Technology and Talent to Deliver on the ROI Promise of Big Data

The challenge for organizations has been the same for decades: how to get ahead of the competition. Today, organizations are looking to their data for a competitive advantage and trying to capitalize on their ability to monetize big data. Several key problems keep companies from achieving consistent returns, including a shortage of skilled practitioners, technology that is still iterating, and a big data "fog of war" that can prevent companies from identifying their needs. Getting through these challenges requires a roadmap. We'll examine winning strategies to show how leaders are moving the return-on-investment needle toward consistent returns.
An overview of the journey, highlighting:
– What are autonomous vehicles, and which technologies make them work?
– What are the stages of the journey to fully autonomous vehicles?
– How are machine learning, deep learning, big data analytics, and other emerging technologies and architectural approaches enabling the journey to autonomous vehicles?
– What are the potential roadblocks to this journey?
– What are the benefits and impact on business models across multiple industries?
An interactive presentation with real-world case studies showing how autonomous vehicles really work.
IDC’s HPC team recently completed an in-depth survey of 63 leading cyber security experts from the US and Europe that explored a number of key cyber security issues, including the use of big data analysis as a cyber security tool, how cyber security teams fit within an organization, cyber security frameworks, key performance indicators, breach plan development, and overall insights on cyber security best practices. This talk will provide a summary of that project, with a special emphasis on the status and prospects of future cyber security efforts particular to the HPC sector.
Today’s enterprise architectures are often composed of a myriad of heterogeneous devices. Bring-Your-Own-Device policies, vendor diversification, and the transition to the cloud all contribute to a sprawling infrastructure whose complexity and scale can only be addressed by modern distributed data processing systems. In this session, we describe the system that Capital One has built to collect, clean, and analyze the security-related events occurring within its digital infrastructure. Raw data from each component is collected and pre-processed using Apache NiFi flows. This raw data is then written into an Apache Kafka cluster, which serves as the primary communications backbone of our platform. The raw data is then parsed, cleaned, and enriched in real time via Apache Metron and Apache Storm. This refined data is then ingested into Elasticsearch, allowing operations teams to detect and monitor events as they occur. The refined data is also transformed into the Apache ORC data format and stored in Amazon S3, allowing data scientists to perform long-term, batch-based analysis. We discuss the challenges involved in architecting and implementing this system, including issues surrounding data quality, performance tuning, and the impact of additional financial regulations relating to data governance. Finally, we describe the result of our efforts and the value that our data platform brings to Capital One as a business.
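The parse/clean/enrich stage described above can be sketched as a pair of pure functions. This is a minimal illustration, not Capital One's actual code: the field names, the `ASSET_OWNERS` lookup table, and the event shape are all hypothetical, standing in for what Metron/Storm topologies would do to each raw record pulled from Kafka.

```python
import json
from datetime import datetime, timezone

# Hypothetical static enrichment table; in a real platform this would
# come from a reference data store, not a hard-coded dict.
ASSET_OWNERS = {"10.0.0.5": "payments-team", "10.0.0.9": "fraud-team"}

def parse_event(raw: str) -> dict:
    """Parse one raw JSON security event (as it might arrive via Kafka)
    into a normalized record with predictable fields."""
    event = json.loads(raw)
    return {
        "src_ip": event.get("src_ip", "unknown"),
        "action": event.get("action", "unknown").lower(),
        "timestamp": event.get("ts"),
    }

def enrich_event(event: dict) -> dict:
    """Attach reference data (asset owner) and an ingest timestamp,
    producing the 'refined' record that downstream stores would index."""
    enriched = dict(event)
    enriched["owner"] = ASSET_OWNERS.get(event["src_ip"], "unassigned")
    enriched["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return enriched

raw = '{"src_ip": "10.0.0.5", "action": "DENY", "ts": "2017-06-01T12:00:00Z"}'
refined = enrich_event(parse_event(raw))
print(refined["owner"], refined["action"])  # payments-team deny
```

Keeping parsing and enrichment as separate, stateless steps mirrors how such logic is distributed across stream-processing workers: each record can be handled independently and in parallel, and the refined output is equally suitable for real-time indexing and for batch storage.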