Graph and hardware-specific optimisations lead to orders-of-magnitude improvements in performance, energy, and cost over conventional graph processing methods. Typical big data platforms, such as Apache Hadoop MapReduce and Apache Spark, rely on generic ...
In this paper we present a case addressing the drawbacks of financial market data: limited volumes and history, sometimes incomplete and erroneous datasets, variable quality, limited availability, and price barriers. The case aims to enable ...
Our society is increasingly digital, and its processes are increasingly digitalized. As an emerging technology for the digital society, graphs provide a universal abstraction to represent concepts and objects, and the relationships between them. However,...
Datacenters are the backbone of our digital society, used by industry, academic researchers, public institutions, and others. To manage resources, datacenters make use of sophisticated schedulers. Each scheduler offers a different set of capabilities and ...
The performance of distributed applications implemented using a microservice architecture depends heavily on the configuration of various parameters, which are hard to tune due to the large configuration search space and the inter-dependence of parameters. While ...
Microservices is a cloud-native architecture in which a single application is implemented as a collection of small, independent, and loosely coupled services. This architecture is gaining popularity in the industry as it promises to make applications ...
Performability is the classic metric for evaluating the performance of static systems in the presence of failures. Compared to static systems, Self-Adaptive Systems (SASs) are inherently more complex due to their constantly changing nature. Thus, software architects ...
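(Illustrative aside, not drawn from the paper above: performability is commonly formalised over a Markov reward model, in which each state i of a continuous-time Markov chain X(s) carries a reward rate r_i and π_i denotes its steady-state probability. A minimal sketch of that formulation, with purely illustrative symbols:)

```latex
% Sketch of a common Markov-reward formulation of performability;
% the symbols X(s), r_i, \pi_i are illustrative, not taken from the abstract.
\[
Y(t) = \int_0^t r_{X(s)} \,\mathrm{d}s ,
\qquad
\lim_{t \to \infty} \frac{\mathbb{E}[Y(t)]}{t} = \sum_i r_i \pi_i .
\]
```

Here Y(t) is the reward accumulated over [0, t], and the limit is the long-run expected reward rate, one common point metric for performability.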
Systematic testing of software performance during development is a persistent challenge, made increasingly important by the magnifying effect of mass software deployment on any savings. In practice, such systematic performance evaluation requires a ...
In this paper we report our experiences from the migration of an AI model inference process, used in the context of an E-health platform, to the Function as a Service model. To that end, a performance analysis is applied across three available ...
It is our great pleasure to welcome you to the 2023 ACM Practically FAIR workshop - PFAIR 2023. This workshop builds upon the popular FAIR data principles to investigate and share best practices for adopting the FAIR principles in practice. The FAIR proposal only ...
This paper proposes an auto-profiling tool for OSCAR, an open-source platform able to support serverless computing in cloud and edge environments. The tool, named OSCAR-P, is designed to automatically test a specified application workflow on different ...
We are pleased to welcome you to the 2023 ACM Workshop on Artificial Intelligence for Performance Modeling, Prediction, and Control - AIPerf'23.
In its first edition, AIPerf intends to foster the usage of AI (such as probabilistic methods, machine ...
Infrastructure-as-code (IaC) is an approach for automating the deployment, maintenance, and monitoring of environments for online services and applications, tasks that developers usually perform manually. The benefit is not only reducing the time and effort but ...
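(Illustrative aside, not drawn from the paper above: a minimal sketch of what declaring an environment as code can look like in Python, assuming Pulumi with its AWS provider is installed and configured; the resource names are hypothetical.)

```python
# Minimal infrastructure-as-code sketch using Pulumi's Python SDK (assumed
# available); `pulumi up` would provision the declared resources.
import pulumi
import pulumi_aws as aws

# Declare part of the environment as code: a storage bucket for service logs.
logs_bucket = aws.s3.Bucket("service-logs")

# Expose the provisioned bucket name so other tooling can consume it.
pulumi.export("logs_bucket_name", logs_bucket.id)
```

Because the environment is described declaratively, the same definition can be versioned, reviewed, and re-applied instead of being recreated by hand.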
As the next generation of diverse workloads like autonomous driving and augmented/virtual reality evolves, computation is shifting from cloud-based services to the edge, leading to the emergence of a cloud-edge compute continuum. This continuum promises ...
Survival analysis studies time-modeling techniques for an event of interest occurring in a population. Survival analysis has found widespread application in healthcare, engineering, and the social sciences. However, the data needed to train survival models ...
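(Illustrative aside, not drawn from the paper above: a minimal sketch of the kind of time-to-event modelling survival analysis performs, using the lifelines library on synthetic, right-censored data.)

```python
# Kaplan-Meier estimation on synthetic durations with censoring flags,
# using the lifelines library (an assumption; any survival library would do).
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
durations = rng.exponential(scale=10.0, size=200)  # time until event or censoring
observed = rng.integers(0, 2, size=200)            # 1 = event observed, 0 = right-censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)

print(kmf.median_survival_time_)      # median time-to-event estimate
print(kmf.survival_function_.head())  # estimated S(t) = P(T > t)
```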
The ability to split applications across different locations in the continuum (edge/cloud) creates the need to break applications down into smaller and more distributed chunks. In this realm, the Function as a Service approach appears as a significant ...
Serverless computing and, in particular, Function as a Service (FaaS) have introduced novel computational approaches with their highly elastic capabilities, per-millisecond billing, and scale-to-zero capacities, thus being of interest for the computing ...
In this paper, we present PerfoRT, a tool to ease software performance regression measurement of Java systems. Its main characteristics include: minimal configuration to ease automation and hide complexity from the end user; a broad scope of performance ...
Advances in digital twin technology are creating value for many companies. We look at digital twin design and operation from a sustainability perspective. We identify some challenges related to a digital twin's sustainable design and operation. ...
The ambition of this talk is to seed discussions around how cloud native technologies can help research on performance engineering, but also around what interesting performance engineering challenges there are to solve with cloud native technologies.
Cloud ...