VMAP has been pleased to support the research of Prof. Ayse Coskun from Boston University.
Her research addresses datacenter energy consumption, which today accounts for roughly 3% of total US electricity use and grows by 15% every year, costing billions of dollars. Close to half of this cost goes to cooling. In addition, high power consumption raises temperatures, and high temperatures accelerate reliability degradation: a significant portion of the hardware failures during the useful life of chips is currently caused by high temperatures. Such failures not only cause customer dissatisfaction but also delay the delivery of results, because requested tasks must be rerun, and increase overall energy use.
The VMware Academic Program had a busy year in 2012! We introduced new events and new publications, and added two new members to our team. This past year we also began to expand our global efforts: VMAP is currently in discussions with our China and India offices on how to extend our programs into those countries, having already sponsored several conferences in China in 2012. In 2013 we will focus our attention on collaborative research in key strategic areas, broadly addressing VMware’s corporate investments in the Software Defined Data Center. Key new areas of interest include storage and intelligent automation. We also continue to support our existing research partners who are addressing performance and security in cloud and virtualized environments. Please read on for our 2012 recap and a preview of upcoming 2013 events.
Security for Virtualized and Cloud Platforms
VMware® invites university-affiliated researchers worldwide to submit proposals for funding in the general area of security for virtualized and cloud platforms. Our research interests in the field of security are broad, including but not limited to the following topics:
- Homomorphic encryption systems and their applications in cloud environments
- Security isolation in mobile hypervisors
- Covert channels in hypervisors
- Multi-level security isolation guarantees with hypervisors
- Security implications of GPU virtualization
- Virtual machine introspection
- Anomaly detection in virtual desktop environments
- Leveraging virtualization to improve intrusion detection
- Secure cloud computation on untrusted platforms
Initial submissions should be extended abstracts, up to two pages long, describing the research project for which funding is requested, including an articulation of its relevance to VMware. After a first review phase we will create a shortlist of proposals selected for further consideration, and invite their proposers to submit detailed proposals, including details of milestones, budget, personnel, etc. Initial funding of up to $150,000 will be provided for a period of one year, with the option to review and renew funding subsequently.
- 16th March — Two page extended abstracts due
- 9th April — Notification of interest from VMware
- 30th April — Full proposals due
- 31st May — Announcement of final acceptance of proposals
Please contact Rita Tavilla, Research Program Manager (rtavilla@vmware.com), with any questions. All submissions should be sent to vmap-rfp@vmware.com.
2011 was a great year for the VMware Academic Program (VMAP). We executed our second RFP, receiving over 50 proposals and providing funding for three great research projects. We sponsored 15 academic conferences, including a great presence at LISA with a very well attended BoF. Our academic licensing program provided software to students, faculty and staff in over 2400 higher education organizations.
Our objective in 2012 and beyond is to establish VMware as a world-class research partner. We will be expanding many of our existing successful programs, and publicizing our investments in the research community and the benefits they bring.
This year we have awarded Graduate Fellowships for the 2012-2013 academic year to two outstanding PhD students from Cornell University and the University of Michigan. Following the positive reception of the VMware special issue of ACM Operating Systems Review (OSR) in 2010, we will launch the VMware Technical Journal at the end of February. We have just released our Spring 2012 RFP, on the topic of security, and are also looking for collaborative opportunities in storage, networking and other areas.
We are sponsoring a broad range of conferences and will have a much stronger presence at key conferences, such as FAST, ASPLOS, and the USENIX Annual Technical Conference. We are looking forward to participating in a new set of conferences outside our traditional focus, starting with EclipseCon 2012 in March, in Reston, VA. We are also launching a year-long competition to find the best student conference presentation and best student paper published in 2012 — send us your nominations by email or social media.
Finally, we are hiring: we need a technologist in Palo Alto to be our research evangelist and create new engagements with university partners. Please contact us with any referrals or questions about the role.
Steve Muir, Director, VMAP (smuir@vmware.com)
Felicia Jadczak, Project Analyst (fjadczak@vmware.com)
VMware is very pleased to award the first two VMware Graduate Fellowships, for the 2012/2013 academic year, to Dongyoon Lee, University of Michigan, and Robert Escriva, Cornell University.
Dongyoon Lee’s research interests are in improving the programmability of parallel computer systems. Unlike sequential processors, a multiprocessor system is not guaranteed to produce the same result even when a program is executed over exactly the same input, which causes significant issues in debugging and fault tolerance. Dongyoon’s research has produced ultra-low overhead software and hardware deterministic replay systems by leveraging complementary strengths of static program analysis, the operating system, and computer architecture.
Dongyoon received the B.S. degree from Seoul National University, Seoul, Korea, in 2004 and the M.S. degree from the University of Michigan, Ann Arbor, MI, in 2009. He is now a PhD candidate in the EECS department at the University of Michigan, where he works with Prof. Satish Narayanasamy as a member of the Advanced Computer Architecture Laboratory (ACAL) and Software Systems Laboratory (SSL) research groups.
Robert Escriva’s research focuses on building and understanding large-scale infrastructure services for web applications. His current research project is HyperDex, a high performance searchable key-value store which provides strong guarantees while achieving high performance. HyperDex strategically places objects on servers so that both search and key-based operations contact a small subset of all servers in the system. Unlike typical key-value stores, HyperDex takes into account all attributes of an object when mapping it to servers. HyperDex’s novel object placement and replication strategies support efficient search, providing high throughput and low latency with linear scaling.
Robert graduated in 2010 from Rensselaer Polytechnic Institute with a B.S. in Computer Science. In the fall of 2010, he joined the Ph.D. program in Computer Science at Cornell University where he works in Emin Gun Sirer’s research group.
Below you will find instructions for a programming exercise that demonstrates some of the concepts from the IAP talks. The exercise is based around a deprivileged hypervisor demonstrator that runs as an Android application, which you can run either on your Android smartphone or on the SDK emulator.
IAP attendees who complete this exercise by February 5th 2012 will be entered in a draw to win an iPad. Details on submitting the completed exercise are provided below.
Please direct any questions or requests for clarification to htuch [at] vmware [dot] com.
The exercise requires an environment capable of running the Android SDK and NDK; Linux is recommended. In such an environment, follow the instructions for downloading and installing the:
- Android SDK. When selecting components to install via the android tool, please ensure that the SDK platform for Android 2.3.3 is included, along with the Android SDK tools and platform tools. The installation root will be referred to as ANDROID_SDK_ROOT below.
- Android NDK. The installation root will be referred to as ANDROID_NDK_ROOT below.
Ensure that ANDROID_SDK_ROOT/platform-tools is in your path.
You will also require a copy of the ARMv7-AR Architecture Reference Manual (ARM ARM), available by registering at infocenter.arm.com.
Download and unpack the IAP starter kit. The installation root will be referred to as IAP_ROOT below. Edit the supplied Makefile and set the paths for your SDK and NDK installations.
The IAP starter kit contains an Android application in IAP_ROOT/IAPMobile. This application includes simplified examples of a deprivileged hypervisor, guest kernel and user threads. A frontend displaying the output from the hypervisor is also included. Most of the code of interest is located in the IAP_ROOT/IAPMobile/jni directory. In particular, the hypervisor implementation is in vmm.c, the guest kernel in kernel.c, and the guest user code in its own source file alongside them.
The hypervisor demonstrator provided in vmm.c is highly simplified. It provides only system call interception, with no support for memory or MMU virtualization, interrupt handling, undefined instruction exceptions, etc. Nonetheless, it is sufficient to execute a paravirtualized, multi-threaded, single address space guest. Some code segments in the trap handling path in vmm.c have been omitted; the programming tasks below require that you provide implementations for the missing code.
You will need to familiarize yourself with various parts of vmm.c to complete the exercise below. Excellent sources of information are the ARM ARM (for understanding the intended ISA semantics being implemented) and man pages (for understanding the system calls used by the VMM). The slides from the IAP talks are also available for reference:
- CPU and memory virtualization
- ARM core virtualization
- Mobile I/O virtualization
- Application-level virtualization
Guest Context Switching
You can build the application .apk by typing make apk in IAP_ROOT. After this, install bin/IAPMobile-debug.apk on your phone or the emulator, e.g., on the emulator with adb -s emulator-5554 install -r bin/IAPMobile-debug.apk. When first launched, the application should display the demonstrator’s initial output.
The first task is to implement LoadGuestRegisters in vmm.c. This routine is responsible for taking the VCPU (virtual CPU) register file maintained by the monitor and copying its contents to a struct pt_regs suitable for setting the guest registers with ptrace.
LoadGuestRegisters should work in the opposite direction to the already implemented SaveGuestRegisters. It should copy r0-r12 directly across, and obtain r13-r14 from the appropriate banked registers, depending on the mode. Finally, only the bits of the CPSR that are not shadow-only, i.e. ~ARM_PSR_MONITOR_MASK, should be placed in uregs, with user mode set.
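As a rough guide, a routine along these lines might look like the sketch below. This is a minimal illustration, not the starter kit’s actual code: the struct vcpu layout and the BankIndex() helper are hypothetical, and only ARM_PSR_MONITOR_MASK and the overall behavior come from the exercise text above.

```c
#include <asm/ptrace.h>   /* struct pt_regs and the ARM_sp/ARM_lr/ARM_pc/ARM_cpsr accessors */
#include <stdint.h>

/* Sketch only: struct vcpu and BankIndex() are hypothetical. */
static void LoadGuestRegisters(const struct vcpu *vcpu, struct pt_regs *uregs)
{
    /* Map the VCPU's current CPSR mode bits to a banked-register slot. */
    int bank = BankIndex(vcpu->cpsr);
    int i;

    /* r0-r12 are unbanked: copy them straight across. */
    for (i = 0; i <= 12; i++)
        uregs->uregs[i] = vcpu->r[i];

    /* r13 (sp) and r14 (lr) come from the banked copies for the mode. */
    uregs->ARM_sp = vcpu->banked_r13[bank];
    uregs->ARM_lr = vcpu->banked_r14[bank];
    uregs->ARM_pc = vcpu->pc;

    /* Expose only the CPSR bits that are not shadow-only, and force
     * user mode (0x10) so the guest runs deprivileged. */
    uregs->ARM_cpsr = (vcpu->cpsr & ~ARM_PSR_MONITOR_MASK) | 0x10;
}
```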
When successfully completed, the application should show the guest making progress beyond the initial screen.
Two of the hypercall handler implementations are missing from the HandleHypercall function: HYPERCALL_MSR_CPSR_C and HYPERCALL_RFE. These hypercalls are used in guest paravirtualization, since they correspond to non-privileged sensitive instructions that can’t be trapped or executed directly. You may consult the ARM ARM for further details on the instruction semantics corresponding to these hypercalls. The goal of this part of the programming exercise is to implement handlers for these hypercalls.
The first hypercall implementation that should be attempted is HYPERCALL_MSR_CPSR_C. This should set only the 8 LSBs of the VCPU CPSR with the contents of the hypercall’s operand.
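A minimal sketch of the corresponding case in HandleHypercall, assuming the operand has already been fetched into a local variable (how the starter kit actually passes the operand may differ):

```c
case HYPERCALL_MSR_CPSR_C:
    /* MSR CPSR_c: replace only the 8 LSBs (mode and interrupt-mask
     * bits) of the virtual CPSR; "operand" is a stand-in for the
     * hypercall argument. */
    vcpu->cpsr = (vcpu->cpsr & ~0xFFu) | (operand & 0xFFu);
    break;
```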
The second hypercall implementation is HYPERCALL_RFE. Here, the VCPU PC and CPSR registers require updating with the memory contents at r14_svc and r14_svc+4, respectively. You should use PTRACE_PEEKDATA to obtain these values from the guest process address space; see man 2 ptrace for documentation on this system call. When completed correctly, the interleaved execution of the two guest threads should be observed.
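As a sketch of what such a handler might look like (again, the vcpu fields and guest_pid parameter are assumptions, not the starter kit’s actual definitions; the errno idiom follows man 2 ptrace):

```c
#include <errno.h>
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>

static int HandleRfe(struct vcpu *vcpu, pid_t guest_pid)
{
    uint32_t base = vcpu->r14_svc;   /* hypothetical banked-register field */
    long word;

    /* PTRACE_PEEKDATA returns the word read, so errno must be cleared
     * beforehand and checked afterwards to detect a failed read. */
    errno = 0;
    word = ptrace(PTRACE_PEEKDATA, guest_pid, (void *)(uintptr_t)base, NULL);
    if (errno != 0)
        return -1;                   /* guest memory access failed */
    vcpu->pc = (uint32_t)word;       /* new guest PC from [r14_svc] */

    errno = 0;
    word = ptrace(PTRACE_PEEKDATA, guest_pid, (void *)(uintptr_t)(base + 4), NULL);
    if (errno != 0)
        return -1;
    vcpu->cpsr = (uint32_t)word;     /* new guest CPSR from [r14_svc+4] */
    return 0;
}
```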
If you’ve reached this point and completed the above exercises, congratulations! You’ve finished the implementation and now have running on your phone (or emulator) a demonstrator of some of the core concepts behind a mobile hypervisor.
To enter the prize draw, or if you’d just like some feedback on the implementation, please send the completed IAP_ROOT/IAPMobile/jni/vmm.c to htuch [at] vmware [dot] com.
The VMware Academic Program (VMAP) sponsors the Academic Research Awards each year via a Request for Proposals (RFP) in a focused technical area. The topic for our Spring 2012 solicitation is Security for Virtualized and Cloud Platforms; more details appear in the RFP announcement above.
Past Award Information
Performance Management Challenges in Virtualized Environments (Spring 2011)
Energy Efficient and Reliable Server Consolidation for the Cloud
Ayse Coskun, Boston University
Hardware Performance Monitoring of Scientific Applications in VMware Environments
Shirley Moore & Dan Terpstra, University of Tennessee
Flexible Computing with VM Fork
Eyal de Lara, University of Toronto
Cloud Computing (Spring 2010)
On modern processors, hardware-assisted virtualization outperforms classical binary translation for most workloads. But hardware virtualization has a potential problem: virtualization exits are expensive. While hardware virtualization executes guest instructions at native speed, guest/VMM transitions can sap performance. Hardware designers attacked this problem both by reducing guest/VMM transition costs and by adding architectural extensions such as nested paging support to avoid exits.
This paper proposes complementary software techniques for reducing the exit frequency. In the simplest form, our VMM inspects guest code dynamically to detect back-to-back pairs of instructions that both exit. By handling a pair of instructions when the first one exits, we save 50% of the transition costs. Then, we generalize from pairs to clusters of instructions that may include loops and other control flow. We use a dynamic binary translator to generate, and cache, custom translations for handling exits. The analysis cost is paid once, when the translation is generated, but amortized over many future executions.
Our techniques have been fully implemented and validated in recent versions of VMware products. We show that clusters can provide some of the same benefits for device I/O performance as can device paravirtualization. Moreover, we demonstrate that clusters often enable substantial gains for nested virtual machines, delivering speedups as high as 1.68x. Intuitively, this result stems from the fact that transitions between the inner guest and VMM are extremely costly, as they are implemented in software by the outer VMM.
Interrupt coalescing is a well known and proven technique for reducing CPU utilization when processing high IO rates in network and storage controllers. Virtualization introduces a layer of virtual hardware for the guest operating system, whose interrupt rate can be controlled by the hypervisor. Unfortunately, existing techniques based on high-resolution timers are not practical for virtual devices, due to their large overhead. In this paper, we present the design and implementation of a virtual interrupt coalescing (vIC) scheme for virtual SCSI hardware controllers in a hypervisor.
We use the number of commands in flight from the guest as well as the current IO rate to dynamically set the degree of interrupt coalescing. Compared to existing techniques in hardware, our work does not rely on high-resolution interrupt-delay timers and thus leads to a very efficient implementation in a hypervisor. Furthermore, our technique is generic and therefore applicable to all types of hardware storage IO controllers which, unlike networking, don’t receive anonymous traffic. We also propose an optimization to reduce inter-processor interrupts (IPIs) resulting in better application performance during periods of high IO activity. Our implementation of virtual interrupt coalescing has been shipping with VMware ESX since 2009. We present our evaluation showing performance improvements in micro benchmarks of up to 18% and in TPC-C of up to 5%.
VMware is pleased to announce the Spring 2011 recipients of the VMware Academic Research Awards. Proposals were solicited in the area of Performance Management Challenges in Virtualized Environments. Congratulations to the following PIs and their research groups!
Energy Efficient and Reliable Server Consolidation for the Cloud
Research using the analysis of thermal sensor and other telemetry data to inform resource management decisions in large-scale datacenter and cloud environments, creating advanced power management capabilities and improving application resilience by coupling proactive fault detection with virtual machine live migration.
Ayse Coskun, Assistant Professor in Electrical & Computer Engineering at Boston University
Hardware Performance Monitoring of Scientific Applications in VMware Environments
Research into the semantics of performance counter virtualization to enable the use of the widely-used Performance Application Programming Interface (PAPI) in virtual HPC compute environments. High Performance Computing is one of VMware’s core interest areas — to learn more check out the HPC blog on the Office of the CTO community.
Shirley Moore, Research Associate Professor at the Innovative Computing Laboratory in the Electrical Engineering and Computer Science Department at the University of Tennessee
Flexible Computing with VM Fork
Research in the area of rapid, scale-out cloning and customization of virtual machines, including the issues related to subsequent contraction of resources to enable true elasticity within a cloud environment.
Eyal de Lara, Associate Professor, Computer Systems and Networks Group in the Department of Computer Science and Department of Electrical and Computer Engineering, University of Toronto
These fellowships are awarded to outstanding students pursuing research related to VMware’s business interests. A small number of academic departments are invited to nominate up to two of their Ph.D. candidates for consideration, and a total of two or three awards will be granted. The fellowship includes a cash award to cover tuition and stipend for 12 months.
In addition to core machine virtualization and its applications, we are broadly interested in work related to cloud computing, including massive scale compute, storage, network, management, analytics, and application platforms.
- The VMware Graduate Fellowship includes a cash award to cover tuition and stipend for 12 months, not to exceed $100k.
- Two or three awards will be given each year.
- Recipients are given preference in intern and full-time hiring.
- All areas broadly related to VMware’s business interests will be considered.
- At the time of application, applicants must be currently enrolled, full-time Ph.D. students.
- Applicants must be nominated by their academic department (limit of two nominations per department).
- Nominations are only accepted from academic departments invited to participate in this program.
- An agent of the applicant’s department (e.g., the graduate coordinator; not the applicant) must provide the following supporting material to email@example.com.
- Applicant’s research statement (in PDF, maximum of two single-spaced pages) describing the applicant’s research agenda and its relevance to VMware.
- Applicant’s CV.
- Three letters of support from faculty, one of whom must be the applicant’s advisor.
- All questions should be sent to firstname.lastname@example.org
- Recipients must be enrolled for the full 12 months of the award, and the award cannot be transferred.
- VMware is to be notified immediately if the recipient is unable to accept the award or leaves the Ph.D. program.
- The award cannot be used to augment other awards.
- Incomplete applications will not be considered.
Charging for cloud storage must account for two costs: the cost of the capacity used and the cost of access to that capacity. For the cost of access, current systems focus on the work requested, such as data transferred or I/O operations completed, rather than the exertion (i.e., effort/resources expended) to complete that work. But, the provider’s cost is based on the exertion, and the exertion for a given amount of work can vary dramatically based on characteristics of the workload, making current charging models unfair to tenants, provider, or both. This paper argues for exertion-based metrics, such as disk time, for the access cost component of cloud storage billing. It also discusses challenges in supporting fair and predictable exertion accounting, such as significant inter-workload interference effects for storage access, and a performance insulation approach to addressing them.
Managing resources at large scale while providing performance isolation and efficient use of underlying hardware is a key challenge for any cloud management software. Most virtual machine (VM) resource management systems, like VMware DRS clusters, Microsoft PRO, and Eucalyptus, do not currently scale to the number of hosts and VMs needed by cloud offerings to support the elasticity required to handle peak demand. In addition to scale, other problems a cloud-level resource management layer needs to solve include heterogeneity of systems, compatibility constraints between virtual machines and underlying hardware, islands of resources created due to storage and network connectivity, and the limited scale of storage resources.
In this paper, we shed light on some core challenges in building a cloud-scale resource management system based on our last five years of research and shipping cluster resource management products. Furthermore, we discuss various techniques to handle these challenges, along with the pros and cons of each technique. We hope to motivate future research in this area to develop practical solutions to these issues.
Ruby vSphere Console (RVC) is a Linux console UI for VMware ESX and VirtualCenter, built on the RbVmomi bindings to the vSphere API. The vSphere object graph is presented as a virtual filesystem, allowing you to navigate and run commands against managed entities using familiar shell syntax. RVC doesn’t (yet) have every feature the vSphere Client does, but for common tasks it can be much more efficient than clicking through a GUI.
- Tab Completion
Commands and paths can be tab completed in the usual fashion. Whitespace must be escaped with a backslash.
192.168.1.105:/> mark a dc/vm/foo
192.168.1.105:/> on ~a
PowerOnVM foo: success
192.168.1.105:/> off ~a
PowerOffVM foo: success
Marks allow you to save a path for later use. Refer to a mark by prefixing its name with a tilde. The “ls” command automatically creates numeric marks for each object listed; these are the numbers in the first column. As a special case, you don’t need to use a tilde with numeric marks. The “cd” command automatically creates the mark “~~” pointing to the previous directory. If a mark reference is input instead of a command then RVC will cd to the marked object. Thus, “~~” is a convenient way to toggle between two directories.
When the working directory is a descendant of a Datacenter object, the mark “~” refers to the Datacenter. For example “~/datastore” is a convenient way to get the datastore folder of the current datacenter.
- Ruby Mode
Beginning an input line with “/” causes RVC to treat it as Ruby code and eval it. This gives you direct access to the underlying RbVmomi library. If the line “//” is input then RVC will toggle between shell and Ruby mode.
Marks can be easily used in Ruby mode since there are magic variables with the same names. Since some marks, like numeric ones, aren’t valid variable names, they also exist with a “_” prefix.
The methods “this”, “conn”, and “dc” are provided in Ruby mode, returning the current object, connection, and datacenter respectively. The connection object is an instance of RbVmomi::VIM.
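For example, a hypothetical Ruby-mode interaction that prints the server’s version string through the underlying API (the address shown is illustrative):

192.168.1.105:/> /conn.serviceContent.about.fullName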
- VMODL Introspection
The “type” command can be used to display the properties and methods of a VMODL class. For example: “type Datacenter”.
In Ruby mode, a ’#’ at the end of the input line will display the output of the “type” command for the resulting object’s class. This is very useful for exploring the vSphere API.
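For example, the following hypothetical Ruby-mode line would display the “type” output for the class of the object returned by conn.serviceContent.about:

192.168.1.105:/> /conn.serviceContent.about#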
- Multiple Connections
RVC can connect to more than one ESX or VC server at the same time. Simply add more hosts to the command line, or use the command “connect”. Each connection is represented by a top-level node in the virtual filesystem. If more than one host is given on the command line, RVC will start in the root of the filesystem instead of automatically cd’ing to a connection.
RVC is designed to let users easily add the commands they need. You can create a command module, or add to an existing one, by adding a Ruby file to ~/.rvc or to any directory on the RVC_MODULE_PATH environment variable. The syntax of a user command module is the same as that of the modules built into RVC, so see the lib/rvc/modules directory for examples.
If you create a generically useful RVC command, please consider sending in a patch so everyone can use it.
VMware funds, facilitates, and participates in research with many prestigious academic institutions. Check out some of the cool projects we are working on!
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in collaboration with VMware
Consolidation with virtual machines offers both cost and energy savings. However, the challenge is to maximize consolidation while honoring service commitments (often called SLAs, or Service Level Agreements). We are addressing this challenge in ways that exploit genetic programming and reinforcement-based machine learning algorithms. We are evaluating whether it is possible to predict impending overload of a virtual machine’s resources and effectively mitigate the situation in a timely way by reallocating resource types that respond at appropriate timescales.
CMU Parallel Data Lab (PDL) in collaboration with VMware
In a large-scale collaboration, CMU PDL is standing up and operating a state-of-the-art cloud based on VMware software and HP hardware (with Intel processors and Samsung DRAM and SSDs) implementing the vCloud APIs. The PDL vCloud will replace a multitude of single-purpose clusters, managed and underutilized by individual groups, with an IaaS private cloud for class projects, simulations, data analyses, and cluster and data-intensive computing activities. Moreover, it is an invaluable resource for studying the usage patterns and demands placed on clouds in academic settings, whose workloads we think will be representative of analytics-heavy non-academic activities. CMU PDL is deeply instrumenting the infrastructure, at various levels and in cooperation with VMware and HP experts, working with different groups to understand their specific requirements, and exploring/developing automation approaches tuned to the challenging characteristics of not-for-profit academic environments.
Elasticity should be treated as a first-class system parameter. Particularly in large cloud environments, elastic applications would benefit if the underlying infrastructure provided primitives for elasticity and were itself elastic. If you want to provide an elastic service and the cloud does not provide good primitives for the degree of elasticity you require, then you are forced to over-provision: acquire more resources than you instantaneously need and subsequently hoard them. Doing so hinders the cloud’s ability to optimize global system utilization, as free or idle resources become hidden. If, however, each cloud layer provides appropriate primitives that permit resources to be acquired and released at a scale equal to or better than what is required, then hoarding is less likely to occur. This permits the cloud infrastructure to collectively migrate resources to the real demand. To achieve this in a multi-layer system, demand must be transparently reflected from top to bottom. We must focus on the design and evaluation of primitives for expressing and managing elasticity at all levels, across nodes, and potentially across data centers.
The ability to save and restore the state of running systems can enable a variety of useful features like suspend-to-disk, system checkpointing, system migration, and many others. Unfortunately, restoring a saved system is time-consuming, discouraging the use of save and restore features. Restoring is expensive primarily because fetching a memory image from a mechanical disk or over the network can take tens of seconds, especially when the image is several gigabytes. We propose a hybrid approach, called working set restore, where the working set is prefetched before the system starts and the rest of memory is lazily restored. By prefetching the working set, working set restore avoids the performance degradation of lazy restore, while still effectively hiding most of the latency of reading the saved memory image.
A real-time scheduler is robust (sustainable) for a certain task set if its schedulability is preserved by the scheduler under lighter system load. The first part of this paper shows that non-preemptive (NPr) robustness of a zero-concrete periodic task set against an increase in period is sufficient to guarantee NPr robustness for all variants of the task set, including the corresponding concrete or non-concrete periodic and sporadic task sets, against any kind of reduction in system load.
Based on this result, the second part of the paper gives the necessary and sufficient conditions for robustness of both NPr fixed-priority (NPFP) and NPr earliest-deadline-first (NPEDF) schedulers, under the discrete time and dense time assumptions separately.
In this paper, we examine data from real-world virtualized deployments to characterize common management workflows and assess their impact on resource usage in the datacenter. We show that while many end-user applications are fairly light on I/O requirements, the management workload has considerable network and disk I/O requirements. We show that the management workload scales with the increasing compute power in the datacenter. Finally, we discuss the implications of this management workload for the datacenter.
Runtime Environments/Systems, Layering, and Virtualized Environments (RESoLVE) Workshop
March 5, 2011
Newport Beach, California
Held in conjunction with ASPLOS 2011
Alex Garthwaite & Orran Krieger (VMware, Inc.)
Today, applications are targeted at high-level runtime systems and frameworks. At the same time, the operating systems on which they run are increasingly split between guest operating systems and (hardware) virtual machines. These trends are enabling applications to be written, tested, and deployed more quickly as well as simplifying tasks such as checkpointing, providing services such as fault-tolerance and movement of VMs, and making better, more power-efficient use of hardware infrastructure.
However, much current work on virtualization still focuses on running unmodified legacy systems, and most higher-level runtime systems ignore the fact that they are deployed in virtual environments. What is needed is a forum to discuss how these separate layers can take advantage of each other’s services, how this can lead to simpler, easier-to-reason-about overall designs, and how this may even enable legacy systems to more readily take advantage of new hardware and software capabilities through virtual appliances and services.
The aim of the workshop is to discuss works-in-progress around how these layers interact and complement each other, and how best to support new software architectures:
- better structuring/communication of services and divisions of labor
- trade-offs in the boundary between trusted and untrusted bases and mechanisms to provide information/feedback across the layers
- approaches for particular services (memory-management/garbage-collection, synchronization/signalling/scheduling)
- prototypes demonstrating combinations of SW- and HW-based techniques to provide better isolation, scaling, and quality-of-service
- visualization/introspection techniques for understanding the resulting systems
The goal is to complement the larger discussion at VEE and ASPLOS.
- Submission: January 19, 2011 (extended)
- Notification: February 1, 2011
- Final Version: February 18, 2011
Submissions should report on original works-in-progress. Submissions will be judged based upon their correctness, relevance, originality, significance, and clarity.
The workshop is seeking submissions for both short (15-minute) and longer (25-minute) presentations. Short submissions should be between 3 and 4 pages in length, and long submissions should be between 6 and 8 pages. Submissions should include the full list of authors and affiliations, be in standard double-column ACM conference format, and be submitted as PDF. Templates for ACM format are available for Microsoft Word and LaTeX at http://www.sigplan.org/authorInformation.htm (use the 9 pt. template). The submission site is now open.
- 10:00 a.m.: 30-minute break
- 10:30 a.m.: Three 30-minute talks
- 1:00 p.m.: Three 30-minute talks
- 2:30 p.m.: 15-minute break
- 2:45 p.m.: Four 15-minute talks
- 3:45 p.m.: 15-minute break
- 4:00 p.m.: Concluding panel & discussion
- Angela Demke Brown (University of Toronto)
- Dave Dice (Sun Labs at Oracle)
- Tim Harris (Microsoft Research)
- Steve Hand (University of Cambridge)
- Mick Jordan (Oracle Labs)
- Doug Lea (State University of New York at Oswego)
- Filip Pizlo (Purdue University)
- Ian Rogers (Azul)
- Dilma da Silva (IBM Research)
- Mario Wolczko (Oracle Labs)