| ============== |
| Page migration |
| ============== |
| |
| Page migration allows moving the physical location of pages between |
| nodes in a NUMA system while the process is running. This means that the |
| virtual addresses that the process sees do not change. However, the |
| system rearranges the physical location of those pages. |
| |
| Also see Documentation/mm/hmm.rst for migrating pages to or from device |
| private memory. |
| |
The main intent of page migration is to reduce the latency of memory accesses
by moving pages close to the processor where the process accessing that memory
is running.
| |
Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated by another process using the migrate_pages() system call. The
migrate_pages() call takes two sets of nodes and moves the pages of a
process that are located on the source nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which offers an interface for page migration similar to its other NUMA
functionality. ``cat /proc/<pid>/numa_maps`` allows an easy review of where
the pages of a process are located. See also the numa_maps documentation in
the proc(5) man page.
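
As an illustration, here is a hedged userspace sketch that uses libnuma's
numa_migrate_pages() wrapper to move all pages of a process from node 0 to
node 1; the pid argument and node numbers are assumptions for the example
(compile with -lnuma)::

    /* Sketch only: moves the pages of <pid> from node 0 to node 1. */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        struct bitmask *from, *to;

        if (argc != 2 || numa_available() < 0) {
            fprintf(stderr, "usage: %s <pid> (requires a NUMA system)\n",
                    argv[0]);
            return 1;
        }

        from = numa_parse_nodestring("0");
        to = numa_parse_nodestring("1");
        if (!from || !to)
            return 1;

        /* The kernel moves the pages located on node 0 to node 1. */
        if (numa_migrate_pages(atoi(argv[1]), from, to) < 0)
            perror("numa_migrate_pages");

        numa_bitmask_free(from);
        numa_bitmask_free(to);
        return 0;
    }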
| |
Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. The move_pages() system call
allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.
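
A hedged sketch of move_pages() usage follows; the page count, buffer and
target node are illustrative (compile with -lnuma, which provides the
wrapper in ``numaif.h``)::

    /* Sketch only: ask the kernel to move one page of the calling
     * process (pid 0) to node 1 and report where it ended up. */
    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        void *pages[1];
        int nodes[1] = { 1 };       /* desired target node */
        int status[1];

        pages[0] = malloc(4096);
        memset(pages[0], 0, 4096);  /* fault the page in first */

        if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0)
            perror("move_pages");
        else
            printf("page is now on node %d (negative means errno)\n",
                   status[0]);
        return 0;
    }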
| |
| Larger installations usually partition the system using cpusets into |
| sections of nodes. Paul Jackson has equipped cpusets with the ability to |
| move pages when a task is moved to another cpuset (See |
| :ref:`CPUSETS <cpusets>`). |
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all its pages are moved with it so that the
performance of the process does not degrade dramatically. The pages
of processes in a cpuset are also moved if the allowed memory nodes of the
cpuset are changed.
| |
All migration techniques preserve the relative location of pages
within a group of nodes: the memory allocation pattern that a process
generated is maintained even after the process has been migrated.
This is necessary in order to preserve the memory latencies, so
processes will run with similar performance after migration.
| |
Page migration occurs in several steps. First a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see the numactl package mentioned above),
and then a low level description of how the details work.
| |
| In kernel use of migrate_pages() |
| ================================ |
| |
| 1. Remove folios from the LRU. |
| |
| Lists of folios to be migrated are generated by scanning over |
folios and moving them into lists. This is done by
calling folio_isolate_lru(), which increases the reference
count of the folio so that it cannot vanish while the
migration occurs. It also prevents the swapper or other
scans from encountering the folio.
| |
| 2. We need to have a function of type new_folio_t that can be |
| passed to migrate_pages(). This function should figure out |
| how to allocate the correct new folio given the old folio. |
| |
| 3. The migrate_pages() function is called which attempts |
| to do the migration. It will call the function to allocate |
| the new folio for each folio that is considered for moving. |
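
Putting these steps together, here is a hedged sketch of in-kernel usage;
the ``my_``-prefixed identifiers are illustrative rather than kernel APIs,
and the migrate_pages() signature shown is the one found in recent
kernels::

    #include <linux/migrate.h>
    #include <linux/gfp.h>

    /* Step 2: a new_folio_t callback that allocates the destination
     * folio on the target node passed via @private. */
    static struct folio *my_alloc_dst(struct folio *src, unsigned long private)
    {
        int nid = (int)private;

        return __folio_alloc_node(GFP_HIGHUSER_MOVABLE,
                                  folio_order(src), nid);
    }

    static void my_migrate_list(struct list_head *folio_list, int target_nid)
    {
        unsigned int nr_succeeded = 0;

        /* Step 1 happened before this point: each folio was taken
         * off the LRU with folio_isolate_lru() and added to
         * @folio_list.  Step 3: attempt the migration itself. */
        migrate_pages(folio_list, my_alloc_dst, NULL,
                      (unsigned long)target_nid, MIGRATE_SYNC,
                      MR_SYSCALL, &nr_succeeded);
    }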
| |
| How migrate_pages() works |
| ========================= |
| |
migrate_pages() does several passes over its list of folios. A folio is moved
if all references to it are removable at the time. The folio has
| already been removed from the LRU via folio_isolate_lru() and the refcount |
| is increased so that the folio cannot be freed while folio migration occurs. |
| |
| Steps: |
| |
| 1. Lock the page to be migrated. |
| |
| 2. Ensure that writeback is complete. |
| |
| 3. Lock the new page that we want to move to. It is locked so that accesses to |
| this (not yet up-to-date) page immediately block while the move is in progress. |
| |
| 4. All the page table references to the page are converted to migration |
| entries. This decreases the mapcount of a page. If the resulting |
| mapcount is not zero then we do not migrate the page. All user space |
| processes that attempt to access the page will now wait on the page lock |
| or wait for the migration page table entry to be removed. |
| |
| 5. The i_pages lock is taken. This will cause all processes trying |
| to access the page via the mapping to block on the spinlock. |
| |
| 6. The refcount of the page is examined and we back out if references remain. |
| Otherwise, we know that we are the only one referencing this page. |
| |
7. The ``i_pages`` xarray (which replaced the radix tree) is checked and if it
does not contain the pointer to this page then we back out because someone
else modified the mapping.
| |
| 8. The new page is prepped with some settings from the old page so that |
| accesses to the new page will discover a page with the correct settings. |
| |
9. The ``i_pages`` xarray is changed to point to the new page.
| |
| 10. The reference count of the old page is dropped because the address space |
| reference is gone. A reference to the new page is established because |
| the new page is referenced by the address space. |
| |
| 11. The i_pages lock is dropped. With that lookups in the mapping |
| become possible again. Processes will move from spinning on the lock |
| to sleeping on the locked new page. |
| |
| 12. The page contents are copied to the new page. |
| |
| 13. The remaining page flags are copied to the new page. |
| |
| 14. The old page flags are cleared to indicate that the page does |
| not provide any information anymore. |
| |
| 15. Queued up writeback on the new page is triggered. |
| |
| 16. If migration entries were inserted into the page table, then replace them |
| with real ptes. Doing so will enable access for user space processes not |
| already waiting for the page lock. |
| |
| 17. The page locks are dropped from the old and new page. |
| Processes waiting on the page lock will redo their page faults |
| and will reach the new page. |
| |
| 18. The new page is moved to the LRU and can be scanned by the swapper, |
| etc. again. |
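
Filesystems hook into these steps through the ``migrate_folio`` address
space operation. For filesystems that keep no private state in their
pagecache folios, the exported generic helper is sufficient; a minimal
sketch (``myfs`` is an illustrative name, not a real filesystem)::

    #include <linux/fs.h>
    #include <linux/migrate.h>

    static const struct address_space_operations myfs_aops = {
        /* The generic helper performs the copy and bookkeeping
         * steps described above. */
        .migrate_folio  = migrate_folio,
        /* readahead, writepages, ... omitted */
    };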
| |
| Non-LRU page migration |
| ====================== |
| |
Although migration originally aimed at reducing the latency of memory
accesses for NUMA, compaction also uses migration to create high-order
| pages. For compaction purposes, it is also useful to be able to move |
| non-LRU pages, such as zsmalloc and virtio-balloon pages. |
| |
| If a driver wants to make its pages movable, it should define a struct |
| movable_operations. It then needs to call __SetPageMovable() on each |
| page that it may be able to move. This uses the ``page->mapping`` field, |
| so this field is not available for the driver to use for other purposes. |
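
A hedged driver-side skeleton follows; the ``mydrv_`` names are
illustrative, and the callbacks assume the struct movable_operations
layout declared in include/linux/migrate.h::

    #include <linux/migrate.h>

    static bool mydrv_isolate_page(struct page *page, isolate_mode_t mode)
    {
        /* Pin driver state so the page cannot go away during
         * migration; return false if it cannot be isolated now. */
        return true;
    }

    static int mydrv_migrate_page(struct page *dst, struct page *src,
                                  enum migrate_mode mode)
    {
        /* Copy the contents and retarget driver references from
         * @src to @dst. */
        return MIGRATEPAGE_SUCCESS;
    }

    static void mydrv_putback_page(struct page *page)
    {
        /* Migration failed or was aborted; undo isolate_page(). */
    }

    static const struct movable_operations mydrv_mops = {
        .isolate_page = mydrv_isolate_page,
        .migrate_page = mydrv_migrate_page,
        .putback_page = mydrv_putback_page,
    };

    /* For each page the driver may be able to move:
     *     __SetPageMovable(page, &mydrv_mops);
     */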
| |
| Monitoring Migration |
| ===================== |
| |
| The following events (counters) can be used to monitor page migration. |
| |
| 1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a |
| page was migrated. If the page was a non-THP and non-hugetlb page, then |
| this counter is increased by one. If the page was a THP or hugetlb, then |
| this counter is increased by the number of THP or hugetlb subpages. |
| For example, migration of a single 2MB THP that has 4KB-size base pages |
| (subpages) will cause this counter to increase by 512. |
| |
| 2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for |
| PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages, |
| if it was a THP or hugetlb. |
| |
| 3. THP_MIGRATION_SUCCESS: A THP was migrated without being split. |
| |
4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.
| |
| 5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had |
| to be split. After splitting, a migration retry was used for its sub-pages. |
| |
| THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or |
| PGMIGRATE_FAIL events. For example, a THP migration failure will cause both |
| THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase. |
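
These counters are exposed in lowercase form in ``/proc/vmstat``
(``pgmigrate_success``, ``thp_migration_split``, and so on). A small
sketch that filters them out::

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/vmstat", "r");

        if (!f) {
            perror("/proc/vmstat");
            return 1;
        }
        /* Print only the page migration counters. */
        while (fgets(line, sizeof(line), f))
            if (!strncmp(line, "pgmigrate_", 10) ||
                !strncmp(line, "thp_migration_", 14))
                fputs(line, stdout);
        fclose(f);
        return 0;
    }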
| |
| Christoph Lameter, May 8, 2006. |
| Minchan Kim, Mar 28, 2016. |
| |
| .. kernel-doc:: include/linux/migrate.h |