Keynote Speakers
Dr. Remzi Arpaci-Dusseau
A Separate Piece: On The Utility of Key/Value Separation
In this talk, I will discuss an age-old systems technique—key/value separation—and show its utility in modern storage systems. I will start with a historical perspective, and then present three systems we have built that greatly benefit from judicious application of this approach, including a high-performance key-value storage system, a fast parallel sort on modern hardware, and a scalable distributed database.
Bio
I am the Vilas Distinguished Achievement Professor and Grace Wahba Professor in the Computer Sciences Department at UW-Madison. I co-lead a group with Professor Andrea Arpaci-Dusseau. Together, we have graduated 28 Ph.D. students and won numerous best-paper awards and one test-of-time award; many of our innovations are used in commercial systems. I received the ACM-SIGOPS Weiser award for “outstanding leadership, innovation, and impact in storage and computer systems research”, and was named an ACM Fellow for “contributions to storage and computer systems” and an AAAS Fellow for “distinguished contributions to computer systems research and development of computing systems with concomitant devotion to computing education for everyone”. I have won the SACM Professor-of-the-Year award seven times, the Rosner “Excellent Educator” award, and the Chancellor’s Distinguished Teaching Award. Our operating systems book (www.ostep.org) is downloaded millions of times yearly and used at numerous institutions worldwide; it is usually the top-selling book on Amazon in operating system theory.
Dr. Samuel Kounev
Serverless Computing: An Old Wine in New Bottles or More?
Market analysts agree that serverless computing has strong market potential, with projected compound annual growth rates between 21% and 28% through 2028 and a projected market value of $36.8 billion by that time. Although serverless computing has gained significant attention in industry and academia in recent years, there is still no consensus on its unique distinguishing characteristics and no precise understanding of how these characteristics differ from those of classical cloud computing. For example, there is no wide agreement on whether serverless is solely a set of requirements from the cloud user’s perspective or whether it should also mandate specific implementation choices on the provider side, such as an autoscaling mechanism to achieve elasticity. Similarly, there is no agreement on whether serverless covers just the operational side, or whether it should also include specific programming models, interfaces, or calling protocols.
In this talk, we seek to dispel this confusion by evaluating the essential conceptual characteristics of serverless computing as a paradigm, while putting the various terms around it into perspective. We examine how the term serverless computing, and related terms, are used today. We explain the historical evolution leading to serverless computing, starting with mainframe virtualization in the 1960s, through Grid and cloud computing, all the way up to today. We review existing cloud computing service models, including IaaS, PaaS, SaaS, CaaS, FaaS, and BaaS, and discuss how they relate to the serverless paradigm. In the second part of the talk, we focus on performance challenges in serverless computing, both from the user’s perspective (finding the optimal size of serverless functions) and from the provider’s perspective (ensuring predictable and fast container start times coupled with fine-grained and accurate elastic scaling mechanisms).
Bio
Dr. Samuel Kounev received an MSc degree in Mathematics and Computer Science from the University of Sofia (Bulgaria) in 2000 and a PhD (Dr.-Ing., summa cum laude) in Computer Science from TU Darmstadt (Germany) in 2005. He was a research fellow at the University of Cambridge (2006-2008) and a Visiting Professor at UPC Barcelona (summers of 2006 and 2007). In 2009, he received the DFG Emmy-Noether-Career-Award (1M€) for excellent young scientists, establishing his research group “Descartes” at the Karlsruhe Institute of Technology (KIT). Since 2014, he has been a Full Professor holding the Chair of Software Engineering at the University of Würzburg, where he has served in various roles, including Dean (2019-2021) and Vice Dean (2017-2019) of the Faculty of Mathematics and Computer Science, Managing Director of the Institute of Computer Science (2016-2017), and Member of the Faculty Board (2015-2021).
His research interests include novel methods, techniques, and tools for engineering dependable, efficient, and secure distributed systems, including cloud-based systems, cyber-physical systems, and scientific computing applications. He recently coauthored the first textbook on Systems Benchmarking (New York, NY, USA: Springer, 2020). In the area of benchmarking, he founded the SPEC Research Group, a consortium within the Standard Performance Evaluation Corporation that provides a platform for collaborative research in quantitative system evaluation and analysis. He is a co-founder of several conferences in the field, including the ACM/SPEC International Conference on Performance Engineering (ICPE) and the IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS), on whose Steering Committees he also serves. His research has led to over 300 publications and multiple scientific and industrial awards, including 7 Best Paper Awards, the SPEC Presidential Award for “Excellence in Research”, a Google Research Award, an ABB Research Award, and a VMware Academic Research Award.
Dr. Lavanya Ramakrishnan
Empowering Scientific Discoveries through Innovative Synergy of Workflows, Data, Artificial Intelligence, and Humans
Scientific discoveries increasingly depend on leveraging computation and data synergistically at scale. Scientific workflows provide a construct to manage computation and data over distributed, large-scale infrastructure, and they have become a cornerstone of seamless, interactive, searchable, collaborative, and reproducible science. While the emergence of Artificial Intelligence (AI) provides an opportunity for automation, it is intertwined with complex human processes, policies, and decisions that must be accounted for in scientific work. This talk will detail our work supporting scientific workflows and data through a dual approach that combines computer science techniques with user research. User research enables us to focus on understanding user behaviors, needs, and motivations in order to build next-generation scientific software ecosystems. The talk will further argue that, going forward, a synergistic approach bringing together workflows, data, artificial intelligence, and humans is essential to furthering scientific discoveries grounded in transparency and trust.
Bio
Dr. Lavanya Ramakrishnan is Senior Scientist and Division Deputy in the Scientific Data Division at Lawrence Berkeley National Lab and Deputy Project Director for the High Performance Data Facility (HPDF). Her research interests are in building software tools for computational and data-intensive science with a focus on workflow, resource, and data management. More recently, her work explores the methods and infrastructure needed to support automation and self-driving labs. In addition, Ramakrishnan established and leads a scientific user research program focusing on studying and enumerating the way that scientists and communities use data and workflows to build usable tools for science. She currently leads several project teams that consist of a mix of social scientists, software engineers, and computer scientists.
Ramakrishnan serves on the High Performance Distributed Computing Steering Committee and the iHARP NSF HDR Institute’s Advisory Board, and has previously served as an Associate Editor of the Journal of Parallel and Distributed Computing and as program committee chair for various conferences. She holds master’s and doctoral degrees in computer science from Indiana University and a bachelor’s degree in computer engineering from VJTI, University of Mumbai. She joined Berkeley Lab as an Alvarez Fellow. Previously, she worked as a research software engineer at the Renaissance Computing Institute and MCNC in North Carolina.
Dr. Xian-He Sun
AI & Data: Challenges and Opportunities in Computer System Research
Big data, AI, and other data-driven applications generate massive amounts of data and create new data-discovery demands. These applications have fundamentally transformed the computing landscape, making it increasingly data-centric and data-driven. However, the performance improvement of disk-based storage systems has lagged behind that of computing and memory, resulting in a significant I/O performance gap. Simultaneously, data discovery necessitates new forms of data, further straining existing memory and storage systems. In this talk, we first address the challenges and solutions related to I/O systems for high-performance computing (HPC). We then delve into the difficulties and potential solutions for managing active data in a distributed cloud environment. Lastly, we discuss the challenge of handling metadata (the data that manages other data), leading to a broader conversation about new challenges in representing information in learning and data discovery. To illustrate the state of the art in HPC I/O systems, we showcase Hermes, an intelligent, multi-tiered, dynamic, and distributed I/O buffering system; Hermes has been released as open source under the widely used HDF5 library. We also touch upon active-data issues through our NSF CSSI ChronoLog project, and finally explore enriched metadata for scientific and knowledge insights by introducing our DoE ASCR project, Coeus. We close by reflecting on the challenges and opportunities presented by the AI and big data era.
Bio
Dr. Xian-He Sun is a University Distinguished Professor, the Ron Hochsprung Endowed Chair of Computer Science, and the director of the Gnosis Research Center for accelerating data-driven discovery at the Illinois Institute of Technology (Illinois Tech). Before joining Illinois Tech, he worked at the DoE Ames National Laboratory, at ICASE, NASA Langley Research Center, and at Louisiana State University, Baton Rouge, and was an ASEE fellow at the Navy Research Laboratories. Dr. Sun is an IEEE Fellow and is known for his memory-bounded speedup model, also called Sun-Ni’s Law, for scalable computing. His research interests include high-performance data processing, memory and I/O systems, and performance evaluation and optimization. He has over 300 publications and 6 patents in these areas and is currently leading multiple large software development projects in HPC I/O systems. Dr. Sun is the Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems and a former chair of the Computer Science Department at Illinois Tech. He received the Golden Core Award from the IEEE Computer Society in 2017, the ACM Karsten Schwan Best Paper Award from ACM HPDC in 2019, the Ron Hochsprung Endowed Chair from Illinois Tech in 2020, and the first-prize best paper award from ACM/IEEE CCGrid in 2021. More information about Dr. Sun can be found at his website, www.cs.iit.edu/~sun/.