Operating Systems 2019, I. Dinur , D. Hendler and M. Kogan-Sadetsky


1 Virtualization

2 What is virtualization?
Creating a virtual version of something: hardware, operating system, application, network, memory, storage. "The construction of an isomorphism between a guest system and a host" [Popek, Goldberg, '74]. What is virtualization, anyway? Literally, "virtual" means simulated. In the context of computer systems, virtualization simulates a guest system by means of a host system: every state of the guest system is mapped to a corresponding state of the host, and every transition between two states of the guest system has a corresponding transition in the host.

3 Example: virtual disk
Partition a single hard disk into multiple virtual disks. A virtual disk has virtual tracks and sectors. Implement the virtual disk as a file: map between virtual-disk and real-disk contents, so a virtual-disk write/read is mapped to a file write/read in the host system. For example, we can split a physical disk into several virtual disks, each represented by a file. The contents of the virtual disk are stored as the contents of the file, and every read or write of a block on the virtual disk is mapped to a read or write at the corresponding offset in the file. In this course we will deal with virtualization of hardware and operating systems.
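The block-to-offset mapping above can be sketched as follows. This is a minimal illustration, not a real virtual-disk implementation: the class name, block size, and in-memory backing file are all assumptions made for the example.

```python
# Minimal sketch of a virtual disk backed by a host file.
# Each virtual block N lives at file offset N * BLOCK_SIZE.
import io

BLOCK_SIZE = 512  # bytes per virtual sector (illustrative choice)

class VirtualDisk:
    def __init__(self, backing_file, num_blocks):
        self.f = backing_file
        # Pre-allocate the file so every virtual block has a home offset.
        self.f.seek(num_blocks * BLOCK_SIZE - 1)
        self.f.write(b"\0")

    def write_block(self, block_no, data):
        assert len(data) == BLOCK_SIZE
        self.f.seek(block_no * BLOCK_SIZE)  # virtual block -> file offset
        self.f.write(data)

    def read_block(self, block_no):
        self.f.seek(block_no * BLOCK_SIZE)
        return self.f.read(BLOCK_SIZE)

# An in-memory buffer stands in for a real host file.
disk = VirtualDisk(io.BytesIO(), num_blocks=8)
payload = b"x" * BLOCK_SIZE
disk.write_block(3, payload)
assert disk.read_block(3) == payload
```

A real hypervisor's virtual-disk format adds metadata, sparse allocation, and caching on top of this basic offset arithmetic.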

4 What is virtualization? (continued)
A way to run multiple operating systems (and their applications) on the same hardware (virtual machines). Only the virtual machine manager (a.k.a. hypervisor) has full system control. Virtual machines are completely isolated from each other (or so we hope). In the context of our discussion, virtualization technology lets us create several virtual systems, which we will call virtual machines, running on top of the same hardware, where each of them may run a different operating system and/or use the services of different hardware. To keep these systems isolated from one another, such a machine runs a piece of system software called the virtual machine manager, or hypervisor, which alone has full access to system resources, while the virtual machines run at a lower privilege level. Each virtual machine receives a share of the system's resources: CPU time, main memory, disk space, and so on. [Still, truly complete isolation is hard to achieve. For instance, it turns out that one virtual machine can learn a great deal about the algorithms running on another by carefully analyzing cache hit/miss patterns, since the virtual machines dynamically share the common cache.]

5 Basic concepts
Virtual Machine (VM), Host, Guest, Hypervisor (type I/II) / Virtual Machine Monitor. A few terms: as noted, we can run several virtual machines concurrently.

6 Basic concepts
Virtual Machine (VM), Host, Guest, Hypervisor (type I/II) / Virtual Machine Monitor. With a hypervisor that runs on top of an operating system (called a type 2 hypervisor), we have a host operating system, and inside each virtual machine a guest operating system.


9 Types of virtualization
Full virtualization – guest OS runs unmodified. Para-virtualization – guest OS must be aware of virtualization; source-code modifications required. Hardware virtualization support may be used for both. Our focus is on full virtualization. There is more than one way to virtualize. In full virtualization, the operating system running inside the virtual machine runs as-is, with no code changes at all. The hypervisor must therefore create a complete illusion for it, so that, except perhaps for some loss of performance, it runs exactly as it would directly on the hardware, even though it does not run in supervisor mode. By contrast, when we can produce a modified version of the operating system adapted to the hypervisor, that version can invoke the hypervisor's services directly, which can simplify the implementation, improve performance, and enable additional functionality (for example, efficient communication between different virtual machines). Such implementations are called para-virtualization. Modern architectures also provide hardware support for virtualization (e.g., from Intel/AMD), which enables improved performance. We will focus on full virtualization.

10 Virtualization advantages
Cost-effectiveness – less hardware: multiple virtual machines / operating systems / services on a single physical machine (server consolidation); various forms of computation as a service. Isolation: good for security; great for reliability and recovery – if a VM crashes it can be rebooted without affecting other services (fault containment); VM migration. Development tool: work on multiple OSes in parallel; develop and debug an OS in user mode – the origins of VMware as a tool for developers. Server consolidation – prior to the VMware ESX server, "IT admins would typically buy, install and configure a new server for every new application/service they had to run in the data center, so servers were very inefficiently utilized and typically used at most 10% of their capacity (during peaks)." [Tanenbaum] vMotion – live migration of running VMs over the network (no need to reboot the OS or restart apps).

11 Virtualization vs. Multi-Processing
[Figure: multi-processing vs. virtualization. Left – processes Pr1, Pr2, … run on one OS above the HW interface (disk, NIC, …), with user-space/kernel separation. Right – multiple OSes (OS1, OS2, …) run above a virtual HW interface provided by the VMM/hypervisor, which runs on the real HW interface.] Multi-processing is in fact a kind of virtualization – a limited virtualization of the processor: a process can be thought of as a virtual CPU on which the program runs. The operating system is the software that provides this limited virtualization; it runs in kernel mode while the processes run in user space and cannot access the hardware directly, yet each process gets the illusion that it is the only one running on the processor. In full virtualization, the operating system itself runs on top of what is called a virtual HW interface: it gets the illusion of direct access to the hardware, but in fact it does not run in privileged mode and has no direct hardware access; instead, the hypervisor creates for it a simulated environment of the hardware it expects.

12 Type 1 and type 2 hypervisors
Full virtualization comes in two flavors. A type 1 hypervisor (bare-metal hypervisor) runs directly on the hardware; it is essentially a stripped-down operating system – a microkernel – whereas a type 2 hypervisor runs on top of a host OS. A type 1 hypervisor has clear advantages in efficiency, and in that there is no need to modify the operating system or even to have access to its source code. A type 2 hypervisor is simpler to run on a computer that also serves other purposes (development, for example) and not only as a host platform. (It can also use all the services of the host operating system, and can therefore generally support more HW drivers than type 1 hypervisors.) Type 1: VMware ESX, Microsoft Hyper-V, Xen. Type 2: VMware Workstation, Microsoft Virtual PC, Sun VirtualBox, QEMU, KVM. Figure 7-1. Location of type 1 and type 2 hypervisors.

13 Type 1 and type 2 hypervisors (continued)
Figure 7-2. Examples of the various combinations of virtualization type and hypervisor. Type 1 hypervisors always run on the bare metal, whereas type 2 hypervisors use the services of an existing host operating system. VMware Fusion is a hypervisor for systems running OS X (such as Macs) with an Intel processor; it lets them run operating systems such as Windows. KVM – virtualization infrastructure inside Linux. Parallels – virtualization for Macs with Intel processors. Wine – open-source software that lets Windows applications run on Unix systems.

14 What's required of a (classic) hypervisor
A hypervisor should provide the following. Safety: full control of the virtualized resources. Fidelity: program behavior on the VM should be identical to its behavior on bare hardware. Efficiency: as much as possible, run directly on the hardware without hypervisor intervention – full interpretation isn't efficient. What do we demand of hypervisor software? First, it must have full control over the system resources being virtualized – processors, memory, disks, I/O, and so on – which means this control must be taken away from the guest operating system. Fidelity means that code running in the virtual machine cannot tell at all that it is not running directly on the hardware, and therefore behaves exactly as it would if it did. Efficiency – most of the VM's code should run as-is, natively, so that performance is close to what would be obtained running directly on the hardware.

15 Classic virtualization: trap and emulate
[Figure: VM1 and VM2 run above the VMM, which runs on the HW; (1) a trap from the guest enters the VMM's interrupt handler, (2) the VMM performs HW emulation, (3) control returns to the process.] A classical VMM executes guest operating systems directly, but at a reduced privilege level (level 1). The VMM (running in level 0) intercepts traps from the de-privileged guest and emulates the trapping instruction against the virtual machine state. This technique has been extensively described in the literature, and it is easily verified that the resulting VMM meets the Popek and Goldberg criteria. Emulation is the process of implementing the functionality/interface of one system on a system having a different functionality/interface.
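The three-step trap-and-emulate cycle above can be sketched as a dispatch routine. This is a toy model: the VCPU class, instruction names, and return value are illustrative assumptions, not any real VMM's interface.

```python
# Sketch of a VMM trap handler: (1) a trap arrives from the de-privileged
# guest, (2) the trapping instruction is emulated against the virtual
# machine state, (3) control returns to the guest.
class VCPU:
    def __init__(self):
        self.interrupts_enabled = True  # virtual machine state

def vmm_trap_handler(vcpu, instruction):
    if instruction == "cli":
        vcpu.interrupts_enabled = False  # emulate: clear the virtual IF
    elif instruction == "sti":
        vcpu.interrupts_enabled = True   # emulate: set the virtual IF
    else:
        raise NotImplementedError(instruction)
    return "resume-guest"                # (3) return to the guest

vcpu = VCPU()
assert vmm_trap_handler(vcpu, "cli") == "resume-guest"
assert vcpu.interrupts_enabled is False
```

The key point is that the guest's privileged state lives in the VCPU structure, never in the real hardware registers.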

16 Trap and emulate: difficulties on x86
Sensitive instructions: provide control over HW resources → behave differently in kernel/supervisor and user modes. Examples: I/O instructions, enabling/disabling interrupts, access to the CR3 register… Privileged instructions: cause a trap if executed in user mode. Theorem [Popek and Goldberg, 1974]: a machine can be virtualized [using trap and emulate] if every sensitive instruction is privileged. It was wrongly assumed that the Popek & Goldberg theorem implies that x86 (before the VT technology was introduced in 2005) cannot be virtualized, while in essence it only implies that it could not be virtualized using only trap and emulate. Indeed, VMware was able to virtualize x86 using binary translation, as we will soon see. The theorem's condition is not satisfied by x86 processors prior to 2005; in 2005, Intel/AMD introduced virtualization HW support.

17 What is sensitive?
CPU – some registers; MMU – page table, segments; interrupts; timers; I/O devices. Examples of sensitive operations on x86: POPF – replaces the flags register, thus also changing the interrupt enable/disable bit; in user mode the change to that bit has no effect. SGDT/LGDT – store/load the global descriptor table register; no effect in user mode.

18 X86 virtualization problem I
The x86 architecture (w/o virtualization extensions) can't be virtualized by trap and emulate: some sensitive instructions are not privileged. Example: the popf instruction. It pops 16 bits from the stack into the flags register; one of the flags masks (i.e., disables) interrupts; the instruction is not privileged. What happens if the OS of a VM runs popf? If the operating system executes this instruction and then reads the register, it will observe behavior different from what would have happened had it been running directly on the hardware. Moreover, without hypervisor intervention, the operating system will see no change in behavior at all, contrary to what it expects after changing the flags. This is a violation of fidelity.
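The fidelity violation can be made concrete with a small simulation. The flag-bit position follows the x86 FLAGS layout (IF is bit 9); the function itself is a toy model of popf's user-mode behavior, not real instruction semantics in full.

```python
# Why popf breaks trap-and-emulate on pre-2005 x86: in user mode the write
# to the IF (interrupt-enable) bit is silently dropped instead of trapping,
# so the hypervisor never gets a chance to emulate it.
IF_BIT = 1 << 9  # IF is bit 9 of the x86 FLAGS register

def popf(flags, popped_value, user_mode):
    if user_mode:
        # IF is privileged state: the update to it is ignored, and no trap
        # is generated -- the de-privileged guest OS cannot tell.
        return (popped_value & ~IF_BIT) | (flags & IF_BIT)
    return popped_value  # kernel mode: all bits take effect

flags = IF_BIT  # interrupts currently enabled
want = 0        # guest OS pops a value with IF clear
# Running de-privileged: IF silently keeps its old value (fidelity broken).
assert popf(flags, want, user_mode=True) & IF_BIT == IF_BIT
# On bare hardware (ring 0) the same pop would disable interrupts.
assert popf(flags, want, user_mode=False) & IF_BIT == 0
```

Because no trap occurs, binary translation (or hardware support) is needed to catch this instruction.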

19 X86 virtualization problem II
Some instructions – push, pop, mov – can take segment selectors (cs, ds, ss) as arguments even in user mode, so the selectors can be read. The selectors contain two bits that encode the current privilege level: in x86 (beginning with the 386) there are four privilege levels (ring 0 to ring 3), and the two lower bits of the cs register are the Current Privilege Level (CPL) of the code. The guest OS thinks it is in ring 0, but it is actually running in ring 1. Result – guest OS confusion.
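The leak is simple bit arithmetic. The selector values below are illustrative (0x08 is a typical ring-0 code selector, but the exact value depends on the GDT layout):

```python
# The low two bits of a segment selector encode the privilege level, so a
# guest kernel de-privileged to ring 1 that reads cs sees CPL=1, not the
# ring 0 it expects.
def cpl(selector):
    return selector & 0x3  # CPL/RPL lives in bits 0-1 of the selector

KERNEL_CS_RING0 = 0x08        # typical ring-0 code selector (illustrative)
GUEST_KERNEL_CS = 0x08 | 0x1  # same selector, de-privileged to ring 1

assert cpl(KERNEL_CS_RING0) == 0
assert cpl(GUEST_KERNEL_CS) == 1  # the guest OS can observe the discrepancy
```

Since reading cs is unprivileged and cannot be made to trap, the hypervisor has no opportunity to lie about the value.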

20 Implementation options
Avoid executing sensitive instructions. Interpretation (Bochs, JSLinux). Binary translation – change the executed code (VMware, QEMU). Para-virtualization – re-compile the guest OS (Xen, Denali). Hardware assistance – Intel VT-x and AMD-V (used by KVM, Xen, VMware).

21 Outline
Concepts, classical CPU virtualization. Binary translation. Memory virtualization. Operating Systems, Spring 2018, I. Dinur, D. Hendler and R. Iakobashvili

22 Binary translation
Binary translation is the process of translating one instruction set to another. Approach I: translate the entire OS when it is loaded into the VM. Key problem – indirect control flow.

23 Dynamic binary translation
Approach II: translate code on the fly. Simplest approach: keep a table mapping old instructions to new instructions; fetch an old instruction, use the table to translate it, execute the new instruction(s). Problem: performance – there is overhead for every instruction, similarly to interpretation.
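The fetch/translate/execute loop above can be sketched directly. The instruction names and replacement pseudo-ops are purely illustrative:

```python
# Naive per-instruction dynamic translation: a table maps each guest
# instruction to its safe replacement; anything not in the table passes
# through unchanged. Every instruction pays the lookup cost.
TRANSLATION_TABLE = {
    "cli": ["vcpu_clear_if"],        # sensitive -> safe emulation call
    "popf": ["vcpu_popf_emulate"],
}

def run_translated(guest_code, execute):
    for insn in guest_code:                        # fetch
        replacement = TRANSLATION_TABLE.get(insn, [insn])  # translate
        for new_insn in replacement:               # execute replacement(s)
            execute(new_insn)

trace = []
run_translated(["mov", "cli", "add"], trace.append)
assert trace == ["mov", "vcpu_clear_if", "add"]
```

The per-instruction lookup is exactly the overhead the next slide's caching removes.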

24 Dynamic BT with caching
Cache each translated code region: after translation, run from the cache, so translation occurs only once. Static translation cannot handle dynamic control transfer: a jump that depends on the contents of a memory address, or an indirect function call (through a function pointer). Translation of dynamic control transfer must be done at execution time. User code does not have to be translated.

25 Virtualization prior to HW support
Before HW support was introduced in 2005, VMware leveraged the Intel processor's protection rings and dynamic binary translation. The VMM was put in ring 0 and the OS in ring 1; user code stayed in ring 3. Thus, any access of user code to the kernel address space caused a trap. On the other hand, binary translation ensured that any sensitive instruction performed by the kernel traps to the VMM. Any such trap yielded a check, possible emulation, and then binary translation of the next kernel basic block: sensitive instructions (if any) are removed and replaced with safe code or a trap to the hypervisor, and a trap is added at the block's end. The branch at the end of the basic block is also replaced by a trap to the hypervisor. User process code can be ignored (no need to translate it). Translated blocks are cached, so they don't need to be translated again and again. A possible optimization: if a basic block A ends with a jump to basic block B and both were translated, B can be called directly from A without trapping to the hypervisor. Figure 7-4. Binary translation rewrites the guest operating system running in ring 1, while the hypervisor runs in ring 0.

26 VMWare binary translation: example
[Slide shows: C code; the resulting 64-bit binary; a log of all code translated when invoking isPrime(49).] Sensitive instructions are rare even in kernels, so performance mainly depends on the translation of regular instructions. Let's see an example of how regular code is translated. On the left, the C code of a primality-test function; on the right, the resulting x86 64-bit assembly code; below, the binary (hex) representation of the code – THIS is the input to the binary translator. We'll see what happens when we run this program with input 49. Translation is done dynamically, at runtime, interleaved with the execution of translated code, and on demand – only if the code is executed. The translation input is the full x86 instruction set (including all sensitive/privileged instructions); the output is a safe subset.

27 VMWare binary translation: example
The translator reads guest memory at the address indicated by the guest PC, decodes instructions, creating Intermediate Representation (IR) objects, and accumulates IR objects into translation units (TUs) – basic blocks (BBs), stopping upon control flow. [Slide labels: First TU; Compiled code fragment (CCF).]

28 VMWare binary translation: example
The translator reads guest memory at the address indicated by the guest PC, decodes instructions, creating Intermediate Representation (IR) objects, and accumulates IR objects into translation units (TUs) – basic blocks (BBs), stopping upon control flow. [Slide labels: Identical code; First TU; Compiled code fragment (CCF).]

29 VMWare binary translation: example
The translator reads guest memory at the address indicated by the guest PC, decodes instructions, creating Intermediate Representation (IR) objects, and accumulates IR objects into translation units (TUs) – basic blocks (BBs), stopping upon control flow. [Slide labels: Translation of the jump BB; First TU; Compiled code fragment (CCF).]

30 VMWare binary translation: example
The translator reads guest memory at the address indicated by the guest PC, decodes instructions, creating Intermediate Representation (IR) objects, and accumulates IR objects into translation units (TUs) – basic blocks (BBs), stopping upon control flow. [Slide labels: Translation of the fall-through BB; First TU; Compiled code fragment (CCF).]

31 VMWare binary translation: example
[Slide shows: C code; 64-bit binary; invoking isPrime(49), logging all code translated.] Which basic block will be translated next?


33 VMWare binary translation: example
[Slide shows: C code; 64-bit binary; invoking isPrime(49), logging all code translated.]

34 VMWare binary translation example: output

35 VMWare binary translation example: output
These continuations remain because the respective basic blocks were never executed.

36 VMWare binary translation operation
The translation cache (TC) stores the translations done so far. A hash table tracks the input-to-output correspondence. The chaining optimization allows one CCF to jump directly to another without calling out of the translation cache. As the TC gradually captures the guest's working set, the proportion of time spent translating decreases. User code does not have to be translated.
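The hash-table lookup that makes each block translate only once can be sketched as follows (chaining is omitted for brevity; all names are illustrative):

```python
# Sketch of the translation cache (TC): a hash table maps a guest
# basic-block address to its translated code, so each block is translated
# at most once; repeat visits run straight from the cache.
translation_cache = {}  # guest address -> translated block
translation_count = 0

def translate_block(guest_addr):
    global translation_count
    translation_count += 1
    return f"ccf@{guest_addr:#x}"  # stand-in for a compiled code fragment

def lookup_or_translate(guest_addr):
    if guest_addr not in translation_cache:   # miss: translate once
        translation_cache[guest_addr] = translate_block(guest_addr)
    return translation_cache[guest_addr]      # hit: reuse

lookup_or_translate(0x1000)
lookup_or_translate(0x2000)
lookup_or_translate(0x1000)  # second visit hits the cache
assert translation_count == 2
```

As the cache fills with the guest's working set, `translate_block` is called less and less often, which is exactly the slide's point about translation overhead fading away.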

37 Dealing with privileged instructions: example
The cli (clear interrupts) instruction is privileged. It is translated to: "vcpu.flags.IF = 0". Much faster than the source binary!
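A sketch of that translation, assuming a hypothetical virtual-CPU flags structure (the field name follows the slide's notation):

```python
# The privileged cli instruction is translated into a single store to the
# virtual CPU's interrupt-enable flag -- no trap, no hardware access.
class VCPUFlags:
    def __init__(self):
        self.IF = 1  # virtual interrupt-enable flag, initially set

vcpu_flags = VCPUFlags()

def translated_cli():
    vcpu_flags.IF = 0  # the entire translation of cli

translated_cli()
assert vcpu_flags.IF == 0
```

A plain memory write is cheaper than executing the original instruction and taking a trap into the VMM, which is why the translated code can outrun trap-and-emulate.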

38 Outline
Concepts, classical CPU virtualization. Binary translation. Memory virtualization.

39 Memory allocation
Each VM usually receives a contiguous set of physical addresses; 1 GByte – 4 GByte are typical values. As far as the VM is concerned, this is the physical memory of the machine. The guest OS allocates pages to guest processes.

40 Memory management
Assumptions of the OS in the VM: physical memory is a contiguous block of addresses from 0 to some n; the OS can map any virtual page to any page frame. The hypervisor must: partition memory among the VMs; ensure that virtual pages map only to assigned page frames. TLB miss: a miss in a HW-managed TLB (e.g., on x86) causes the HW to fetch the translation from the page table; hence the VM's OS must not manage the real page table.

41 Option 1: brute force
[Figure: the guest OS's page directory and page tables live in VM memory; the hypervisor (VMM software) defines these pages as not R/W and keeps its own VM memory layout; the hardware (TLB, CPU, CR3) walks the real table; a guest access to its page tables raises an interrupt and the VMM corrects the address.]

42 Brute force – description
Guest page tables are read- and write-protected in the host system. If the guest OS reads a page table (e.g., for page eviction), writes a page table (e.g., after a page fault), or changes CR3, the system traps. The hypervisor then uses a per-VM memory layout to return answers to the VM and to update the layout. The hypervisor switches the VM memory layout when a new VM is scheduled.
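The trap-on-write path can be sketched as follows. The class, the frame-offset scheme, and all field names are illustrative assumptions, not a real MMU interface:

```python
# Brute-force scheme sketch: guest page-table pages are write-protected, so
# every guest update traps; the hypervisor applies the update both to its
# per-VM view and to the real table the hardware walks.
class BruteForceMMU:
    def __init__(self, host_base):
        self.host_base = host_base  # where this VM's frames start in host memory
        self.guest_view = {}        # what the guest thinks its page table holds
        self.real_table = {}        # the table the hardware actually walks
        self.traps = 0

    def guest_writes_pte(self, vpage, gframe):
        # The guest's store faults (the page is read-only); the hypervisor
        # emulates the write and keeps both tables consistent.
        self.traps += 1
        self.guest_view[vpage] = gframe
        self.real_table[vpage] = self.host_base + gframe

mmu = BruteForceMMU(host_base=100)
mmu.guest_writes_pte(vpage=7, gframe=3)
assert mmu.traps == 1
assert mmu.real_table[7] == 103  # hardware sees the host frame
```

The cost is obvious from the sketch: every single page-table update by the guest pays a full trap into the hypervisor.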

43 Option 2: shadow page tables
[Figure: the guest OS maintains its page directory and page table, referenced by a guest CR3 (G-CR3); the VMM (software) maintains a shadow page table loaded into the real CR3 used by the TLB/CPU (hardware); on a fault, an interrupt is raised and the VMM corrects the shadow page table.]

44 Shadow page tables – description
The hypervisor maintains "shadow page tables". Guest page tables map guest VA (GVA) → guest PA (GPA); shadow tables map guest VA → host PA (HPA). The hypervisor does not trap guest updates to the guest's page table; the result is an inconsistency between the guest page table and the shadow page table. When a guest process accesses a virtual address, the physical address used is not the one in the guest page table but the one in the shadow page table: the HW translates correctly, because it is aware only of the shadow tables.

45 Shadow page tables – description (continued)
If the address is in the TLB – a TLB hit, and there is no problem. When a guest process causes a page fault, the hypervisor begins execution; if required, the hypervisor updates the shadow page table. Performance is as good as native execution as long as there are no page faults. Shadow page tables should be cached so that once a VM is re-scheduled its page tables do not have to be rebuilt from scratch.
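A shadow entry is just the composition of the two mappings described above. The dictionaries and addresses below are illustrative:

```python
# A shadow page-table entry composes two mappings: guest-virtual ->
# guest-physical (maintained by the guest OS) and guest-physical ->
# host-physical (owned by the hypervisor). The hardware walks only the
# composed shadow table.
guest_pt = {0x1: 0xA}      # GVA page -> GPA frame, written by the guest OS
gpa_to_hpa = {0xA: 0x5A}   # GPA frame -> HPA frame, owned by the hypervisor

def build_shadow_entry(gva_page):
    gpa = guest_pt[gva_page]
    return gpa_to_hpa[gpa]  # shadow entry maps GVA -> HPA directly

shadow_pt = {0x1: build_shadow_entry(0x1)}
assert shadow_pt[0x1] == 0x5A
```

Keeping this composition up to date lazily, on page faults, is what the fault-handling slide below describes.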

46 Shadow page tables – page faults (continued)
There are two scenarios when handling a page fault; the hypervisor "walks" the guest page table to determine which one it is. Guest page fault – no translation in the guest page tables → "inject" the page fault for the guest to handle. Guest translation found → update the shadow table accordingly.
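The two-way classification can be sketched directly (all names and addresses are illustrative):

```python
# Fault classification sketch: on a page fault the hypervisor walks the
# guest page table; no guest translation means a true guest fault (injected
# back), otherwise it is a hidden fault and the shadow entry is refreshed.
def handle_page_fault(gva_page, guest_pt, gpa_to_hpa, shadow_pt):
    if gva_page not in guest_pt:
        return "inject-to-guest"            # guest must run its own handler
    gpa = guest_pt[gva_page]
    shadow_pt[gva_page] = gpa_to_hpa[gpa]   # hidden fault: fix shadow entry
    return "resume"

guest_pt = {0x2: 0xB}
gpa_to_hpa = {0xB: 0x7B}
shadow_pt = {}
assert handle_page_fault(0x9, guest_pt, gpa_to_hpa, shadow_pt) == "inject-to-guest"
assert handle_page_fault(0x2, guest_pt, gpa_to_hpa, shadow_pt) == "resume"
assert shadow_pt[0x2] == 0x7B
```

Only the first case is visible to the guest; the second is resolved entirely inside the hypervisor.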

47 Shadow page tables – updating CR3
[Figure: three guest page tables, selectable by a virtual CR3, and three corresponding shadow page tables, selectable by the real CR3.] Whenever the guest OS changes CR3 (for instance, upon a process context switch), it traps to the VMM, which sets the real CR3 to the shadow page table corresponding to the respective process. Slide taken from a presentation by VMware.


50 Undiscovered guest page table
[Figure: the virtual CR3 points at a guest page table that has no corresponding shadow page table yet.] Whenever the VMM encounters a new value of CR3 (pointing to a new page table), it must create a new instance of the shadow page table structure. Slide taken from a presentation by VMware.


52 Option 3: Extended/nested page tables
[Figure: the guest OS maintains its page directory and page tables, referenced by CR3; the hypervisor (VMM software) maintains host page tables, referenced by the EPTP; the hardware TLB/CPU walks both sets of tables.]

53 Nested/extended page tables - description
The name implies having page tables within page tables; the essence of the idea is a hardware assist: the hardware has an extra pointer and the ability to walk an extra set of page tables. The idea is called Extended Page Tables (EPT) by Intel. Guest page tables hold the guest VA → guest PA mapping, accessed through the standard CR3. Extended page tables hold the host VA → host PA mapping, accessed through the EPTP (EPT pointer), where host VA = guest PA.

54 Walking extended page tables

55 Extended page tables – description (cont'd)
The TLB, as usual, holds guest VA → host PA. On a memory access: if the translation is found in the TLB – no problem. If it is not in the TLB but there is no page fault, the hardware walks both tables and updates the TLB. If there is a page fault, the hypervisor takes the host virtual page (= guest physical page) and maps it to a host physical page.
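The nested walk and TLB refill can be sketched as follows (tables and addresses are illustrative; a real EPT walk is multi-level):

```python
# Sketch of the nested walk with EPT: hardware first walks the guest table
# (GVA -> GPA, rooted at CR3), then the extended table (GPA -> HPA, rooted
# at the EPTP); the TLB caches the composed GVA -> HPA translation.
guest_pt = {0x3: 0xC}  # rooted at the guest's CR3
ept = {0xC: 0x9C}      # rooted at the EPTP, owned by the hypervisor
tlb = {}

def access(gva_page):
    if gva_page in tlb:
        return tlb[gva_page]          # TLB hit: no walk needed
    hpa = ept[guest_pt[gva_page]]     # hardware walks both tables
    tlb[gva_page] = hpa               # and refills the TLB
    return hpa

assert access(0x3) == 0x9C
assert tlb[0x3] == 0x9C  # a second access would hit the TLB
```

Unlike shadow paging, no hypervisor software runs on the miss path here; the composition is done by the hardware walker itself.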

56 Sources
"Modern Operating Systems", 4th edition, A. Tanenbaum and H. Bos. "Virtual Machines", J. E. Smith and R. Nair. A presentation by Niv Gilboa, drawing on "Formal requirements for virtualizable third generation architectures", G. J. Popek and R. P. Goldberg, CACM, 1974. "A comparison of software and hardware techniques for x86 virtualization", K. Adams and O. Agesen, ASPLOS 2006. A presentation by VMware.

