In computer operating systems, memory paging is a memory management scheme that allows the physical memory used by a program to be non-contiguous.[1] This also avoids the problem of memory fragmentation and the need for compaction to reduce it.

Paging is often combined with the related technique of allocating and freeing page frames and storing pages on and retrieving them from secondary storage[a] in order to allow the aggregate size of the address spaces to exceed the physical memory of the system.[2] For historical reasons, this technique is sometimes referred to as swapping.

When combined with virtual memory, it is known as paged virtual memory. In this scheme, the operating system retrieves data from secondary storage in blocks of the same size (pages). Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

Hardware support is necessary for efficient translation of logical addresses to physical addresses. As such, paged memory functionality is usually hardwired into a CPU through its Memory Management Unit (MMU) or Memory Protection Unit (MPU), and separately enabled by privileged system code in the operating system's kernel. In CPUs implementing the x86 instruction set architecture (ISA), for instance, memory paging is enabled via the CR0 control register.
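
As a rough illustration of what this hardware translation involves, the following C sketch resolves a virtual address through a single-level page table. The page size, table size, and entry layout are assumptions chosen for the example; real MMUs use multi-level tables and translation lookaside buffers (TLBs), and raise a page fault rather than returning an error code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096u          /* assumed 4 KiB pages             */
#define PAGE_SHIFT 12
#define NUM_PAGES  1024u          /* deliberately tiny address space */

/* One page-table entry: a valid bit plus the physical frame number. */
typedef struct {
    bool     valid;
    uint32_t frame;
} pte_t;

static pte_t page_table[NUM_PAGES];   /* single-level table for one process */

/* Translate a virtual address; returns false where hardware would fault. */
static bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number    */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */

    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return false;                            /* would raise a page fault */

    *paddr = (page_table[vpn].frame << PAGE_SHIFT) | offset;
    return true;
}

int main(void)
{
    page_table[3].valid = true;   /* map virtual page 3 to physical frame 7 */
    page_table[3].frame = 7;

    uint32_t paddr;
    if (translate(3 * PAGE_SIZE + 42, &paddr))
        printf("physical address: 0x%x\n", paddr);   /* 7 * 4096 + 42 */
    return 0;
}
```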

History

In the 1960s, swapping was an early virtual memory technique. An entire program or entire segment would be "swapped out" (or "rolled out") from RAM to disk or drum, and another one would be swapped in (or rolled in).[3][4] A swapped-out program would be current but its execution would be suspended while its RAM was in use by another program; a program with a swapped-out segment could continue running until it needed that segment, at which point it would be suspended until the segment was swapped in.

A program might include multiple overlays that occupy the same memory at different times. Overlays are not a method of paging RAM to secondary storage[a] but merely of minimizing the program's RAM use. Subsequent architectures used memory segmentation, and individual program segments became the units exchanged between secondary storage and RAM. A segment was the program's entire code segment or data segment, or sometimes other large data structures. These segments had to be contiguous when resident in RAM, requiring additional computation and movement to remedy fragmentation.[5]

Ferranti's Atlas, together with the Atlas Supervisor developed at the University of Manchester (1962),[6] was the first system to implement memory paging. Subsequent early machines, and their operating systems, supporting paging include the IBM M44/44X and its MOS operating system (1964),[7] the SDS 940[8] and the Berkeley Timesharing System (1966), a modified IBM System/360 Model 40 and the CP-40 operating system (1967), the IBM System/360 Model 67 and operating systems such as TSS/360 and CP/CMS (1967), the RCA 70/46 and the Time Sharing Operating System (1967), the GE 645 and Multics (1969), and the PDP-10 with added BBN-designed paging hardware and the TENEX operating system (1969).

Those machines, and subsequent machines supporting memory paging, use either a set of page address registers or in-memory page tables[d] to allow the processor to operate on arbitrary pages anywhere in RAM as a seemingly contiguous logical address space. These pages became the units exchanged between secondary storage[a] and RAM.

Page faults

When a process tries to reference a page not currently mapped to a page frame in RAM, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system. The operating system must then do the following (a simplified sketch in C follows the list):

  1. Determine whether a stolen page frame still contains an unmodified copy of the page; if so, use that page frame.
  2. Otherwise, obtain an empty page frame in RAM to use as a container for the data, and:
    • Determine whether the page was ever initialized.
    • If so, determine the location of the data on secondary storage[a].
    • Load the required data into the available page frame.
  3. Update the page table to refer to the new page frame.
  4. Return control to the program, transparently retrying the instruction that caused the page fault.
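
The steps above can be made concrete with a toy simulation in C in which "secondary storage" and "RAM" are just arrays and the page table is a simple page-to-frame map. All names and sizes are illustrative; the check for a stolen-but-intact frame (step 1) is omitted here and sketched separately under page reclamation below.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  16
#define NUM_PAGES   8        /* virtual pages of one process */
#define NUM_FRAMES  4        /* physical page frames         */

static char backing_store[NUM_PAGES][PAGE_SIZE];  /* stand-in for secondary storage */
static char frames[NUM_FRAMES][PAGE_SIZE];        /* stand-in for RAM               */
static int  page_table[NUM_PAGES];                /* page -> frame, or -1 if absent */
static bool page_initialized[NUM_PAGES];          /* was the page ever written?     */
static int  next_free_frame;                      /* trivial allocator for the demo */

static void handle_page_fault(int page)
{
    /* Step 2: obtain an empty frame and fill it, either from backing
     * store or with zeros (the demo never runs out of frames; a real
     * OS would evict a page -- see page replacement below). */
    int frame = next_free_frame++;
    if (page_initialized[page])
        memcpy(frames[frame], backing_store[page], PAGE_SIZE);
    else
        memset(frames[frame], 0, PAGE_SIZE);

    /* Step 3: update the page table to refer to the new frame. */
    page_table[page] = frame;

    /* Step 4: the faulting access is simply retried by the caller. */
}

static char read_byte(int page, int offset)
{
    if (page_table[page] < 0)                     /* page fault */
        handle_page_fault(page);
    return frames[page_table[page]][offset];      /* retried access */
}

int main(void)
{
    for (int i = 0; i < NUM_PAGES; i++)
        page_table[i] = -1;

    strcpy(backing_store[5], "hello");   /* pretend page 5 was paged out earlier */
    page_initialized[5] = true;

    printf("%c%c\n", read_byte(5, 0), read_byte(5, 1));   /* prints "he" */
    return 0;
}
```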

When all page frames are in use, the operating system must select a page frame to reuse for the page the program now needs. If the evicted page frame was dynamically allocated by a program to hold data, or if a program modified it since it was read into RAM (in other words, if it has become "dirty"), it must be written out to secondary storage before being freed. If a program later references the evicted page, another page fault occurs and the page must be read back into RAM.

The method the operating system uses to select the page frame to reuse, which is its page replacement algorithm, affects efficiency. The operating system predicts the page frame least likely to be needed soon, often through the least recently used (LRU) algorithm or an algorithm based on the program's working set. To further increase responsiveness, paging systems may predict which pages will be needed soon, preemptively loading them into RAM before a program references them, and may steal page frames from pages that have been unreferenced for a long time, making them available. Some systems clear new pages to avoid data leaks that compromise security; some set them to installation defined or random values to aid debugging.
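
As a concrete illustration of one common policy, the sketch below implements exact LRU replacement over a small, fixed set of page frames using a logical access clock. Production kernels approximate LRU with reference bits and page lists rather than per-access timestamps, so this is illustrative only.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 4
#define NO_PAGE   (-1)

typedef struct {
    int      page;        /* page resident in this frame, or NO_PAGE */
    uint64_t last_used;   /* logical time of the most recent access  */
} frame_t;

static frame_t  frames[NUM_FRAMES];
static uint64_t clock_ticks;
static unsigned faults;

/* Access one page; on a miss, evict the least recently used frame. */
static void access_page(int page)
{
    clock_ticks++;

    for (int i = 0; i < NUM_FRAMES; i++) {
        if (frames[i].page == page) {            /* hit: refresh the timestamp */
            frames[i].last_used = clock_ticks;
            return;
        }
    }

    /* Miss (page fault): pick the frame with the oldest timestamp. */
    int victim = 0;
    for (int i = 1; i < NUM_FRAMES; i++)
        if (frames[i].last_used < frames[victim].last_used)
            victim = i;

    frames[victim].page      = page;   /* a real OS would write back a dirty page first */
    frames[victim].last_used = clock_ticks;
    faults++;
}

int main(void)
{
    for (int i = 0; i < NUM_FRAMES; i++)
        frames[i] = (frame_t){ NO_PAGE, 0 };

    int trace[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        access_page(trace[i]);

    printf("page faults: %u\n", faults);
    return 0;
}
```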

Page fetching techniques

Demand paging

When pure demand paging is used, pages are loaded only when they are referenced. A program backed by a memory-mapped file begins execution with none of its pages in RAM. As the program commits page faults, the operating system copies the needed pages from a file (e.g., a memory-mapped file, paging file, or swap partition) containing the page data into RAM.
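
On POSIX systems, memory-mapped files expose demand paging directly to applications. The sketch below assumes an existing file named data.bin: mmap() establishes the mapping without reading any data, and the first touch of each page triggers a page fault that the kernel services by reading that page from the file.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);     /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Establish the mapping; no file data is read at this point. */
    unsigned char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching one byte per page causes a page fault on the first access
     * to each page, and the kernel reads that page in on demand. */
    long page = sysconf(_SC_PAGESIZE);
    unsigned long sum = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sum += data[off];

    printf("sum of the first byte of each page: %lu\n", sum);
    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```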

Anticipatory paging

Some systems use only demand paging—waiting until a page is actually requested before loading it into RAM.

Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be needed soon, and pre-loading such pages into RAM before they are requested. (This is often done in combination with pre-cleaning, which guesses which pages currently in RAM are not likely to be needed soon, and pre-writes them out to storage.)

When a page fault occurs, anticipatory paging systems will not only bring in the referenced page, but also other pages that are likely to be referenced soon. A simple anticipatory paging algorithm will bring in the next few consecutive pages even though they are not yet needed (a prediction using locality of reference); this is analogous to a prefetch input queue in a CPU. Swap prefetching will prefetch recently swapped-out pages if there are enough free pages for them.[9]

If a program ends, the operating system may delay freeing its pages, in case the user runs the same program again.

Some systems allow application hints; the application may request that a page be made available and continue without delay.
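
On Linux and other POSIX systems, one such hint is madvise() with MADV_WILLNEED, which asks the kernel to begin paging a range in without blocking the caller. The file name below is a stand-in; the sketch simply maps it and hints at the first megabyte.

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);     /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    unsigned char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hint that the first 1 MiB will be needed soon; the kernel may start
     * reading it in asynchronously, and the call does not wait for I/O. */
    size_t ahead = st.st_size < (1 << 20) ? (size_t)st.st_size : (1 << 20);
    if (madvise(data, ahead, MADV_WILLNEED) != 0)
        perror("madvise");

    /* ... other work proceeds here while the read-ahead happens ... */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```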

Page replacement techniques

Free page queue, stealing, and reclamation

The free page queue is a list of page frames that are available for assignment. Preventing this queue from being empty minimizes the computing necessary to service a page fault. Some operating systems periodically look for pages that have not been recently referenced and then free the page frame and add it to the free page queue, a process known as "page stealing". Some operating systems[e] support page reclamation; if a program commits a page fault by referencing a page that was stolen, the operating system detects this and restores the page frame without having to read the contents back into RAM.
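
A minimal sketch of that bookkeeping, with illustrative names only: stolen frames are pushed onto a free list while remembering which page they last held, and a later fault on that page can reclaim the frame without any I/O.

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative free-page-queue bookkeeping; not a real kernel interface. */

typedef struct frame {
    int           page;       /* page whose (still intact) contents this frame holds */
    struct frame *next;       /* link in the free page queue                          */
} frame_t;

static frame_t *free_queue;   /* head of the free page queue */

/* "Steal" a frame: its page is unmapped but its contents are left intact,
 * so the frame can be reclaimed later without re-reading from storage. */
static void steal_frame(frame_t *f)
{
    f->next    = free_queue;
    free_queue = f;
}

/* Reclaim: if the faulting page's old frame is still on the free queue,
 * unlink it and reuse it directly (no I/O needed). */
static frame_t *reclaim_frame(int page)
{
    for (frame_t **pp = &free_queue; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->page == page) {
            frame_t *f = *pp;
            *pp = f->next;             /* unlink from the free queue */
            return f;
        }
    }
    return NULL;                       /* contents already reused; must read from storage */
}

int main(void)
{
    frame_t frames[3] = { { 5, NULL }, { 9, NULL }, { 2, NULL } };
    for (int i = 0; i < 3; i++)
        steal_frame(&frames[i]);       /* pages 5, 9 and 2 are stolen */

    frame_t *f = reclaim_frame(9);
    printf("page 9 %s\n", f ? "reclaimed without I/O" : "must be read back in");
    return 0;
}
```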

Pre-cleaning

The operating system may periodically pre-clean dirty pages: write modified pages back to secondary storage[a] even though they might be further modified. This minimizes the amount of cleaning needed to obtain new page frames at the moment a new program starts or a new data file is opened, and improves responsiveness. (Unix operating systems periodically use sync to pre-clean all dirty pages; Windows operating systems use "modified page writer" threads.)

Some systems allow application hints; the application may request that a page be cleared or paged out and continue without delay.
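
On POSIX systems, such hints include msync() with MS_ASYNC, which schedules write-back of dirty pages in a shared file mapping without waiting for the I/O, and madvise() with MADV_DONTNEED, which tells the kernel the pages need not stay resident. The scratch file in this sketch is created only to make the example self-contained.

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.bin", O_RDWR | O_CREAT, 0600);   /* throwaway example file */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0xAB, len);              /* dirty all four pages */

    /* Ask the kernel to schedule write-back of the dirty pages now,
     * without blocking until the I/O completes (pre-cleaning). */
    if (msync(buf, len, MS_ASYNC) != 0)
        perror("msync");

    /* Tell the kernel these pages need not remain resident; the frames
     * may be reclaimed, and the data can be re-read from the file later. */
    if (madvise(buf, len, MADV_DONTNEED) != 0)
        perror("madvise");

    munmap(buf, len);
    close(fd);
    return 0;
}
```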

Thrashing

After completing initialization, most programs operate on a small number of code and data pages compared to the total memory the program requires. The pages most frequently accessed are called the working set.

When the working set is a small percentage of the system's total number of pages, virtual memory systems work most efficiently and an insignificant amount of computing is spent resolving page faults. As the working set grows, resolving page faults remains manageable until the growth reaches a critical point. Then faults go up dramatically and the time spent resolving them overwhelms time spent on the computing the program was written to do. This condition is referred to as thrashing. Thrashing can occur when a program works with huge data structures, as its large working set causes continual page faults that drastically slow down the system. Satisfying page faults may require freeing pages that will soon have to be re-read from secondary storage.[a] "Thrashing" is also used in contexts other than virtual memory systems; for example, to describe cache issues in computing or silly window syndrome in networking.
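
The effect can be demonstrated deliberately: the sketch below touches pages of a large region at random, defeating locality. With REGION_BYTES set larger than the machine's free RAM (and swap enabled), nearly every touch becomes a page fault and throughput collapses; with a region that fits in RAM, the same loop runs at memory speed. The sizes are placeholders to be adjusted for the machine at hand.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative only: set REGION_BYTES larger than available RAM (with swap
 * enabled) to provoke thrashing; keep it smaller to see normal behaviour. */
#define REGION_BYTES (1UL << 30)        /* 1 GiB; adjust for the machine */
#define PAGE_BYTES   4096UL
#define TOUCHES      (10UL * 1000 * 1000)

int main(void)
{
    unsigned char *region = malloc(REGION_BYTES);
    if (region == NULL) { perror("malloc"); return 1; }
    memset(region, 1, REGION_BYTES);    /* make sure every page really exists */

    unsigned long pages = REGION_BYTES / PAGE_BYTES;
    unsigned long long sum = 0;
    unsigned long long x = 88172645463325252ULL;   /* xorshift64 PRNG state */

    /* Random touches spread the working set over the whole region; if the
     * region exceeds RAM, almost every touch is a page fault. */
    for (unsigned long i = 0; i < TOUCHES; i++) {
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;
        sum += region[(x % pages) * PAGE_BYTES];
    }

    printf("sum: %llu\n", sum);
    free(region);
    return 0;
}
```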

A worst case might occur on VAX processors. A single MOVL instruction crossing a page boundary could have both a source operand and a destination operand using displacement deferred addressing mode, where the longword containing each operand address crosses a page boundary and each operand itself also crosses a page boundary. This single instruction references ten pages; if not all are in RAM, each will cause a page fault. As each fault occurs, the operating system needs to go through extensive memory-management routines, perhaps causing multiple I/Os, which might include writing other process pages to disk and reading pages of the active process from disk. If the operating system could not allocate ten page frames to this program, then remedying one page fault would discard another page the instruction needs, and any restart of the instruction would fault again.

To decrease excessive paging and resolve thrashing problems, a user can increase the number of pages available per program, either by running fewer programs concurrently or increasing the amount of RAM in the computer.

Sharing

In multi-programming or in a multi-user environment, many users may execute the same program, written so that its code and data are in separate pages. To minimize RAM use, all users share a single copy of the program. Each process's page table is set up so that the pages that address code point to the single shared copy, while the pages that address data point to different physical pages for each process.

Different programs might also use the same libraries. To save space, only one copy of the shared library is loaded into physical memory. Programs which use the same library have virtual addresses that map to the same pages (which contain the library's code and data). When programs want to modify the library's code, they use copy-on-write, so memory is only allocated when needed.
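
Copy-on-write can be observed from user space with a private file mapping, which is also how shared libraries are typically mapped: reads are served from the shared page-cache copy, and the first write to a page gives the writing process its own private copy while the file and other processes' mappings stay unchanged. The file name below is hypothetical.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("libdata.bin", O_RDONLY);   /* hypothetical file standing in for a library */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* MAP_PRIVATE makes the mapping copy-on-write with respect to the file;
     * PROT_WRITE is allowed even though the file was opened read-only. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned char before = p[0];
    p[0] = (unsigned char)(before + 1);   /* first write: the kernel copies this page */

    printf("in-memory byte changed from %u to %u; the file itself is unchanged\n",
           before, p[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```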

Shared memory is an efficient means of communication between programs. Programs can share pages in memory, and then write and read to exchange data.
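
A common POSIX mechanism for this is a named shared-memory object mapped with MAP_SHARED. The sketch below creates such an object and writes a message into it; any other process that opens the same (arbitrarily chosen) name and maps it sees the same physical pages. On older glibc versions the program must be linked with -lrt.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Create (or open) a named shared-memory object; a reader would call
     * shm_open("/paging_demo", O_RDWR, 0) and mmap() the same pages. */
    int fd = shm_open("/paging_demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    const size_t len = 4096;
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    char *shared = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writes here are immediately visible to every process mapping the object. */
    strcpy(shared, "hello from the writer");
    printf("wrote: %s\n", shared);

    munmap(shared, len);
    close(fd);
    /* shm_unlink("/paging_demo") would remove the object when no longer needed. */
    return 0;
}
```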

Implementations

Ferranti Atlas

The first computer to support paging was the supercomputer Atlas,[10][11][12] jointly developed by Ferranti, the University of Manchester and Plessey in 1963. The machine had an associative (content-addressable) memory with one entry for each 512-word page. The Supervisor[13] handled non-equivalence interruptions[f] and managed the transfer of pages between core and drum in order to provide a one-level store[14] to programs.

Microsoft Windows

Windows 3.x and Windows 9x

Paging has been a feature of Microsoft Windows since Windows 3.0 in 1990. Windows 3.x creates a hidden file named 386SPART.PAR or WIN386.SWP for use as a swap file. It is generally found in the root directory, but it may appear elsewhere (typically in the WINDOWS directory). Its size depends on how much swap space the system has (a setting selected by the user under Control Panel → Enhanced under "Virtual Memory"). If the user moves or deletes this file, a blue screen will appear the next time Windows is started, with the error message "The permanent swap file is corrupt". The user will be prompted to choose whether or not to delete the file (even if it does not exist).

Windows 95, Windows 98 and Windows Me use a similar file, and the settings for it are located under Control Panel → System → Performance tab → Virtual Memory. Windows automatically sets the size of the page file to start at 1.5× the size of physical memory, and expand up to 3× physical memory if necessary. If a user runs memory-intensive applications on a system with low physical memory, it is preferable to manually set these sizes to a value higher than the default.

Windows NT

The file used for paging in the Windows NT family is pagefile.sys. The default location of the page file is in the root directory of the partition where Windows is installed. Windows can be configured to use free space on any available drives for page files. It is required, however, for the boot partition (i.e., the drive containing the Windows directory) to have a page file on it if the system is configured to write either kernel or full memory dumps after a Blue Screen of Death. Windows uses the paging file as temporary storage for the memory dump. When the system is rebooted, Windows copies the memory dump from the page file to a separate file and frees the space that was used in the page file.[15]

Fragmentation

In the default configuration of Windows, the page file is allowed to expand beyond its initial allocation when necessary. If this happens gradually, it can become heavily fragmented which can potentially cause performance problems.[16] The common advice given to avoid this is to set a single "locked" page file size so that Windows will not expand it. However, the page file only expands when it has been filled, which, in its default configuration, is 150% of the total amount of physical memory.[17] Thus the total demand for page file-backed virtual memory must exceed 250% of the computer's physical memory before the page file will expand.

The fragmentation of the page file that occurs when it expands is temporary. As soon as the expanded regions are no longer in use (at the next reboot, if not sooner) the additional disk space allocations are freed and the page file is back to its original state.

Locking a page file size can be problematic if a Windows application requests more memory than the total size of physical memory and the page file, leading to failed requests to allocate memory that may cause applications and system processes to fail. Also, the page file is rarely read or written in sequential order, so the performance advantage of having a completely sequential page file is minimal. However, a large page file generally allows the use of memory-heavy applications, with no penalties besides using more disk space. While a fragmented page file may not be an issue by itself, fragmentation of a variable-size page file will over time create several fragmented blocks on the drive, causing other files to become fragmented. For this reason, a fixed-size contiguous page file is better, provided that the size allocated is large enough to accommodate the needs of all applications.

The required disk space may be easily allocated on systems with more recent specifications (e.g., a system with 3 GB of memory having a 6 GB fixed-size page file on a 750 GB disk drive, or a system with 6 GB of memory and a 16 GB fixed-size page file and 2 TB of disk space). In both examples, the system uses about 0.8% of the disk space with the page file pre-extended to its maximum.

Defragmenting the page file is also occasionally recommended to improve performance when a Windows system is chronically using much more memory than its total physical memory.[18] This view ignores the fact that, aside from the temporary results of expansion, the page file does not become fragmented over time. In general, performance concerns related to page file access are much more effectively dealt with by adding more physical memory.

Unix and Unix-like systems

Unix systems, and other Unix-like operating systems, use the term "swap" to describe the act of substituting disk space for RAM when physical RAM is full.[19] In some of those systems, it is common to dedicate an entire partition of a hard disk to swapping. These partitions are called swap partitions. Many systems have an entire hard drive dedicated to swapping, separate from the data drive(s), containing only a swap partition. A hard drive dedicated to swapping is called a "swap drive" or a "scratch drive" or a "scratch disk". Some of those systems only support swapping to a swap partition; others also support swapping to files.

Linux

The Linux kernel supports a virtually unlimited number of swap backends (devices or files), and also supports assignment of backend priorities. When the kernel swaps pages out of physical memory, it uses the highest-priority backend with available free space. If multiple swap backends are assigned the same priority, they are used in a round-robin fashion (which is somewhat similar to RAID 0 storage layouts), providing improved performance as long as the underlying devices can be efficiently accessed in parallel.[20]
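
Programmatically, a privileged process can add a swap backend with an explicit priority through the Linux swapon(2) system call (the swapon(8) command's -p option does the same from the shell). The device path and priority below are placeholders, and the device must already have been prepared with mkswap.

```c
#include <stdio.h>
#include <sys/swap.h>

int main(void)
{
    /* Assumed swap partition; the path is an example only.  Must run as root. */
    const char *device = "/dev/sdb1";

    /* Encode a priority of 10 into the flags word. */
    int prio  = 10;
    int flags = SWAP_FLAG_PREFER |
                ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

    if (swapon(device, flags) != 0) {
        perror("swapon");
        return 1;
    }

    printf("enabled %s as swap with priority %d\n", device, prio);
    return 0;
}
```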

Swap files and partitions

From the end-user perspective, swap files in versions 2.6.x and later of the Linux kernel are virtually as fast as swap partitions; the limitation is that swap files should be contiguously allocated on their underlying file systems. To increase performance of swap files, the kernel keeps a map of where they are placed on underlying devices and accesses them directly, thus bypassing the cache and avoiding filesystem overhead.[21][22] When residing on HDDs, which are rotational magnetic media devices, one benefit of using swap partitions is the ability to place them on contiguous HDD areas that provide higher data throughput or faster seek time. However, the administrative flexibility of swap files can outweigh certain advantages of swap partitions. For example, a swap file can be placed on any mounted file system, can be set to any desired size, and can be added or changed as needed. Swap partitions are not as flexible; they cannot be enlarged without using partitioning or volume management tools, which introduce various complexities and potential downtimes.

Swappiness

Swappiness is a Linux kernel parameter that controls the relative weight given to swapping out of runtime memory, as opposed to dropping pages from the system page cache, whenever a memory allocation request cannot be met from free memory. Swappiness can be set to a value from 0 to 200.[23] A low value causes the kernel to prefer to evict pages from the page cache, while a higher value causes the kernel to prefer to swap out "cold" memory pages. The default value is 60; setting it higher can cause high latency if cold pages need to be swapped back in (for example, when interacting with a program that had been idle), while setting it lower (even to 0) may cause high latency when files that had been evicted from the cache need to be read again, but will make interactive programs more responsive as they will be less likely to need to swap back cold pages. Swapping can also slow down HDDs further because it involves many random writes, while SSDs do not have this problem. The default value works well in most workloads, but desktops and interactive systems may want to lower the setting, while batch processing and less interactive systems may want to increase it.[24]
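
The parameter is exposed as /proc/sys/vm/swappiness (equivalently, the vm.swappiness sysctl). The sketch below reads the current value and, if run with root privileges, lowers it to 10; the chosen value is only an example.

```c
#include <stdio.h>

int main(void)
{
    /* Read the current value. */
    FILE *f = fopen("/proc/sys/vm/swappiness", "r");
    if (f == NULL) { perror("fopen"); return 1; }
    int value = 0;
    if (fscanf(f, "%d", &value) == 1)
        printf("current vm.swappiness: %d\n", value);
    fclose(f);

    /* Lower it to 10 (requires root; equivalent to `sysctl vm.swappiness=10`). */
    f = fopen("/proc/sys/vm/swappiness", "w");
    if (f == NULL) { perror("fopen for write"); return 1; }
    fprintf(f, "10\n");
    fclose(f);
    return 0;
}
```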

Swap death

When the system memory is highly insufficient for the current tasks and a large portion of memory activity goes through a slow swap, the system can become practically unable to execute any task, even if the CPU is idle. When every process is waiting on the swap, the system is considered to be in swap death.[25][26]

Swap death can happen due to incorrectly configured memory overcommitment.[27][28][29]

The original description of the "swapping to death" problem relates to the X server. If code or data used by the X server to respond to a keystroke is not in main memory, then when the user enters a keystroke, the server will take one or more page faults, requiring those pages to be read from swap before the keystroke can be processed, slowing the response to it. If those pages do not remain in memory, they will have to be faulted in again to handle the next keystroke, making the system practically unresponsive even if it is actually executing other tasks normally.[30]

macOS

macOS uses multiple swap files. The default (and Apple-recommended) installation places them on the root partition, though it is possible to place them instead on a separate partition or device.[31]

AmigaOS 4

AmigaOS 4.0 introduced a new system for allocating RAM and defragmenting physical memory. It still uses a flat shared address space that cannot be defragmented. It is based on slab allocation and on paging memory that allows swapping. Paging was implemented in AmigaOS 4.1. It can lock up the system if all physical memory is used up.[32] Swap memory could be activated and deactivated, allowing the user to choose to use only physical RAM.

Performance

The backing store for a virtual memory operating system is typically many orders of magnitude slower than RAM. Hard disks, for instance, introduce several milliseconds delay before the reading or writing begins. Therefore, it is desirable to reduce or eliminate swapping, where practical. Some operating systems offer settings to influence the kernel's decisions.

  • Linux offers the /proc/sys/vm/swappiness parameter, which changes the balance between swapping out runtime memory, as opposed to dropping pages from the system page cache.
  • Windows 2000, XP, and Vista offer the DisablePagingExecutive registry setting, which controls whether kernel-mode code and data can be eligible for paging out.
  • Mainframe computers frequently used head-per-track disk drives or drums for page and swap storage to eliminate seek time, and used several technologies[33] to allow multiple concurrent requests to the same device in order to reduce rotational latency.
  • Flash memory has a finite number of erase-write cycles (see limitations of flash memory), and the smallest amount of data that can be erased at once might be very large,[g] seldom coinciding with the page size. Therefore, flash memory may wear out quickly if used as swap space under tight memory conditions. For this reason, mobile and embedded operating systems (such as Android) may not use swap space. On the positive side, flash memory is practically delayless compared to hard disks, and non-volatile, unlike RAM chips. Schemes like ReadyBoost and Intel Turbo Memory are made to exploit these characteristics.

Many Unix-like operating systems (for example AIX, Linux, and Solaris) allow using multiple storage devices for swap space in parallel, to increase performance.

Swap space size

In some older virtual memory operating systems, space in swap backing store is reserved when programs allocate memory for runtime data. Operating system vendors typically issue guidelines about how much swap space should be allocated.

Physical and virtual address space sizes

Paging is one way of allowing the size of the addresses used by a process, which is the process's "virtual address space" or "logical address space", to be different from the amount of main memory actually installed on a particular computer, which is the physical address space.

Main memory smaller than virtual memory

In most systems, the size of a process's virtual address space is much larger than the available main memory.[35] For example:

  • The address bus that connects the CPU to main memory may be limited. The i386SX CPU's 32-bit internal addresses can address 4 GB, but it has only 24 pins connected to the address bus, limiting installed physical memory to 16 MB. There may be other hardware restrictions on the maximum amount of RAM that can be installed.
  • The maximum memory might not be installed because of cost, because the model's standard configuration omits it, or because the buyer did not believe it would be advantageous.
  • Sometimes not all internal addresses can be used for memory anyway, because the hardware architecture may reserve large regions for I/O or other features.

Main memory the same size as virtual memory

A computer with true n-bit addressing may have 2^n addressable units of RAM installed. An example is a 32-bit x86 processor with 4 GB and without Physical Address Extension (PAE). In this case, the processor is able to address all the RAM installed and no more.

However, even in this case, paging can be used to support more virtual memory than physical memory. For instance, many programs may be running concurrently. Together, they may require more physical memory than can be installed on the system, but not all of it will have to be in RAM at once. A paging system makes efficient decisions on which memory to relegate to secondary storage, leading to the best use of the installed RAM.

In addition the operating system may provide services to programs that envision a larger memory, such as files that can grow beyond the limit of installed RAM. Not all of the file can be concurrently mapped into the address space of a process, but the operating system might allow regions of the file to be mapped into the address space, and unmapped if another region needs to be mapped in.

Main memory larger than virtual address space

A few computers have a main memory larger than the virtual address space of a process, such as the Magic-1,[35] some PDP-11 machines, and some systems using 32-bit x86 processors with Physical Address Extension. This nullifies a significant advantage of paging, since a single process cannot use more main memory than the amount of its virtual address space. Such systems often use paging techniques to obtain secondary benefits:

  • The "extra memory" can be used in the page cache to cache frequently used files and metadata, such as directory information, from secondary storage.
  • If the processor and operating system support multiple virtual address spaces, the "extra memory" can be used to run more processes. Paging allows the cumulative total of virtual address spaces to exceed physical main memory.
  • A process can store data in memory-mapped files on memory-backed file systems, such as the tmpfs file system or file systems on a RAM drive, and map files into and out of the address space as needed.
  • A set of processes may still depend upon the enhanced security features page-based isolation may bring to a multitasking environment.

The size of the cumulative total of virtual address spaces is still limited by the amount of secondary storage available.

Notes

  1. ^ a b c d e f Initially drums, and later hard disk drives and solid-state drives, have been used for overlays and paging.
  2. ^ E.g., Multics, OS/VS1, OS/VS2, VM/370
  3. ^ E.g., z/OS.
  4. ^ Some systems have a global page table, some systems have a separate page table for each process, some systems have a separate page table for each segment[b] and some systems have cascaded page tables.[c]
  5. ^ For example, MVS (Multiple Virtual Storage).
  6. ^ A non-equivalence interruption occurs when the high order bits of an address do not match any entry in the associative memory.
  7. ^ 128 KiB for an Intel X25-M SSD[34]

References

  1. ^ Operating System Concepts, 10th Edition. February 2021. 9.3 Paging. ISBN 978-1-119-80036-1.
  2. ^ "Paging in Operating System". GeeksforGeeks. Retrieved 2025-08-07.
  3. ^ Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Operating systems". Encyclopedia of computer science and technology. Vol. 11. CRC Press. p. 442. ISBN 0-8247-2261-2. Archived from the original on 2025-08-07.
  4. ^ Cragon, Harvey G. (1996). Memory Systems and Pipelined Processors. Jones and Bartlett Publishers. p. 109. ISBN 0-86720-474-5. Archived from the original on 2025-08-07.
  5. ^ Belzer, Jack; Holzman, Albert G.; Kent, Allen, eds. (1981). "Virtual memory systems". Encyclopedia of computer science and technology. Vol. 14. CRC Press. p. 32. ISBN 0-8247-2214-0. Archived from the original on 2025-08-07.
  6. ^ Kilburn, T; Payne, R B; Howarth, D J (1962). "The Atlas Supervisor".
  7. ^ R. W. O'Neill. Experience using a time sharing multiprogramming system with dynamic address relocation hardware. Proc. AFIPS Computer Conference 30 (Spring Joint Computer Conference, 1967). pp. 611–621. doi:10.1145/1465482.1465581.
  8. ^ Scientific Data Systems Reference Manual, SDS 940 Computer (PDF). 1966. pp. 8–9.
  9. ^ "Swap prefetching". Linux Weekly News. 2025-08-07.
  10. ^ Sumner, F. H.; Haley, G.; Chenh, E. C. Y. (1962). "The Central Control Unit of the 'Atlas' Computer". Information Processing 1962. IFIP Congress Proceedings. Vol. Proceedings of IFIP Congress 62. Spartan.
  11. ^ "The Atlas". University of Manchester: Department of Computer Science. Archived from the original on 2025-08-07.
  12. ^ "Atlas Architecture". Atlas Computer. Chilton: Atlas Computer Laboratory. Archived from the original on 2025-08-07.
  13. ^ Kilburn, T.; Payne, R. B.; Howarth, D. J. (December 1961). "The Atlas Supervisor". Computers - Key to Total Systems Control. Conferences Proceedings. Vol. 20, Proceedings of the Eastern Joint Computer Conference Washington, D.C. Macmillan. pp. 279–294. Archived from the original on 2025-08-07.
  14. ^ Kilburn, T.; Edwards, D. B. G.; Lanigan, M. J.; Sumner, F. H. (April 1962). "One-Level Storage System". IRE Transactions on Electronic Computers (2). Institute of Radio Engineers: 223–235. doi:10.1109/TEC.1962.5219356.
  15. ^ Tsigkogiannis, Ilias (2025-08-07). "Crash Dump Analysis". driver writing != bus driving. Microsoft. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  16. ^ "Windows Sysinternals PageDefrag". Sysinternals. Microsoft. 2025-08-07. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  17. ^ "Page File Information". Oingo KPT. Retrieved 2025-08-07.
  18. ^ "What Does Defragging Do?". HP Tech Takes. Hewlett-Packard. Retrieved 2025-08-07.
  19. ^ Both, David (2025-08-07). "An introduction to swap space on Linux systems". Opensource.com. Retrieved 2025-08-07.
  20. ^ "swapon(2) – Linux man page". Linux.Die.net. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  21. ^ ""Jesper Juhl": Re: How to send a break? - dump from frozen 64bit linux". LKML. 2025-08-07. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  22. ^ "Andrew Morton: Re: Swap partition vs swap file". LKML. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  23. ^ "The Linux Kernel Documentation for /proc/sys/vm/".
  24. ^ Andrews, Jeremy (2025-08-07). "Linux: Tuning Swappiness". kerneltrap.org. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  25. ^ Rik van Riel (2025-08-07). "swap death (as in 2.1.91) and page tables". Archived from the original on 2025-08-07.
  26. ^ Kyle Rankin (2012). DevOps Troubleshooting: Linux Server Best Practices. Addison-Wesley. p. 159. ISBN 978-0-13-303550-6. Archived from the original on 2025-08-07.
  27. ^ Andries Brouwer. "The Linux kernel: Memory". Archived from the original on 2025-08-07.
  28. ^ Red Hat. "Capacity Tuning". Archived from the original on 2025-08-07.
  29. ^ "Memory overcommit settings". 2025-08-07. Archived from the original on 2025-08-07.
  30. ^ Peter MacDonald (2025-08-07). "swapping to death". Archived from the original on 2025-08-07.
  31. ^ John Siracusa (2025-08-07). "Mac OS X 10.1". Ars Technica. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  32. ^ AmigaOS Core Developer (2025-08-07). "Re: Swap issue also on Update 4 ?". Hyperion Entertainment. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  33. ^ E.g., Rotational Position Sensing on a Block Multiplexor channel
  34. ^ "Aligning filesystems to an SSD's erase block size | Thoughts by Ted". Thunk.org. 2025-08-07. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  35. ^ a b Bill Buzbee. "Magic-1 Minix Demand Paging Design". Archived from the original on 2025-08-07. Retrieved 2025-08-07.