Lines Matching refs:it

600 		// It also makes sense to move it from the inactive to the active, since
601 // otherwise the page daemon wouldn't come to keep track of it (in idle
602 // mode) -- if the page isn't touched, it will be deactivated after a
664 area, it is deleted. If it covers the beginning or the end, the area is
666 area, it is split in two; in this case the second area is returned via
700 // If no one else uses the area's cache and it's an anonymous cache, we can
701 // resize or split it, too.
856 // Don't unlock the cache yet because we might have to resize it
891 // Now we can unlock it.
935 for (VMCachePagesTree::Iterator it
937 vm_page* page = it.Next();) {
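The `VMCachePagesTree::Iterator` loops quoted above all share one idiom: the iterator lives in the for-loop's init clause, the element is declared in the condition, and iteration ends when `Next()` returns NULL. A minimal self-contained sketch of that shape, using hypothetical `Page`/`PageIterator` stand-ins rather than the real Haiku types:

```cpp
#include <cstddef>

// Hypothetical stand-ins for VMCachePagesTree and vm_page: Next() yields
// the next element or NULL when the sequence is exhausted.
struct Page { int index; Page* next; };

struct PageIterator {
	Page* current;

	Page* Next()
	{
		Page* page = current;
		if (page != NULL)
			current = page->next;
		return page;
	}
};

int CountPages(Page* head)
{
	int count = 0;
	// Same shape as the kernel loops: the iterator is declared in the
	// init clause, the element in the condition, and the loop stops as
	// soon as Next() returns NULL.
	for (PageIterator it = { head }; Page* page = it.Next();) {
		(void)page;  // a real loop would operate on the page here
		count++;
	}
	return count;
}
```

The condition-declaration keeps the element's scope limited to the loop body, which is why the kernel code can reuse the name `page` in loop after loop.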
967 for (VMAddressSpace::AreaRangeIterator it
969 VMArea* area = it.Next();) {
980 for (VMAddressSpace::AreaRangeIterator it
982 VMArea* area = it.Next();) {
989 // can't do anything about it.
1003 // If someone else uses the area's cache or it's not an anonymous cache, we
1030 for (VMAddressSpace::AreaRangeIterator it
1032 VMArea* area = it.Next();) {
1135 // temporarily unlock the current cache since it might be mapped to
1180 		// We created this cache, so we must delete it again. Note that we
1182 // deadlock, since VMCache::_RemoveConsumer() will try to lock it, too.
1221 When it has to wait, the function calls \c Unlock() on both \a locker1
1226 If the function does not have to wait it does not modify or unlock any
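The contract documented at lines 1221 and 1226 is: when the function must block, it first calls `Unlock()` on both lockers so no lock is held while waiting; when it does not block, both lockers are left untouched and still held. A hedged, single-threaded sketch of that contract using standard lockers instead of Haiku's (the `WaitIfBusy` name and the `pageBusy` flag are illustrative assumptions, not the kernel API):

```cpp
#include <mutex>

// Hypothetical sketch of the documented contract: if we have to wait,
// release both lockers before blocking; otherwise leave them alone.
// Returns true if a wait was (or would be) performed.
bool WaitIfBusy(bool pageBusy,
	std::unique_lock<std::mutex>& locker1,
	std::unique_lock<std::mutex>& locker2)
{
	if (!pageBusy)
		return false;	// no wait: both lockers remain locked

	// Drop both locks before blocking, so other threads can make the
	// page unbusy while we sleep.
	locker1.unlock();
	locker2.unlock();
	// (a real implementation would block on a condition here)
	return true;
}
```

The point of unlocking both before sleeping is deadlock avoidance: the thread that will make the page unbusy may need exactly those locks.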
1277 for (VMAddressSpace::AreaRangeIterator it
1279 VMArea* area = it.Next();) {
1597 // space (if it is the kernel address space that is), the low memory handler
1663 // if it's a stack, make sure that two pages are available at least
2119 for (VMCachePagesTree::Iterator it
2121 vm_page* page = it.Next();) {
2149 // make it into the mapped copy -- this will need quite some changes
2205 // get the vnode for the object, this also grabs a ref to it
2249 // unmapped, ensure it is not wired.
2286 // prefetch stuff, and also, probably don't trigger it at this place.
2355 // Check whether the source area exists and is cloneable. If so, mark it
2419 // to the source cache - but otherwise it has no idea that we need
2465 for (VMCachePagesTree::Iterator it = cache->pages.GetIterator();
2466 vm_page* page = it.Next();) {
2523 // one referencing it (besides us currently holding a second reference),
2632 for (VMCachePagesTree::Iterator it = lowerCache->pages.GetIterator();
2633 vm_page* page = it.Next();) {
2657 // The area must be readable in the same way it was
2685 for (VMCachePagesTree::Iterator it = lowerCache->pages.GetIterator();
2686 vm_page* page = it.Next();) {
2690 // The area must be readable in the same way it was
2705 // The area must be readable in the same way it was previously
2860 // If the source area is writable, we need to move it one layer up as well
3027 for (VMCachePagesTree::Iterator it = cache->pages.GetIterator();
3028 vm_page* page = it.Next();) {
3169 encountering one that has been accessed. From then on it will continue to
3387 for (VMCache::ConsumerList::Iterator it = cache->consumers.GetIterator();
3388 VMCache* consumer = it.Next();) {
3448 for (VMCache::ConsumerList::Iterator it = cache->consumers.GetIterator();
3449 VMCache* consumer = it.Next();) {
3512 for (VMCache::ConsumerList::Iterator it = cache->consumers.GetIterator();
3513 VMCache* consumer = it.Next();) {
3714 VMAreasTree::Iterator it = VMAreas::GetIterator();
3715 while ((area = it.Next()) != NULL) {
3751 VMAreasTree::Iterator it = VMAreas::GetIterator();
3752 while ((area = it.Next()) != NULL) {
4016 for (VMAddressSpace::AreaIterator it
4018 VMArea* area = it.Next();) {
4149 		// If the address is not a kernel address, we just skip it. The
4150 // architecture specific code has to deal with it.
4353 // map in the new heap and initialize it
4505 // exists, it isn't that hard to find all of the ones we need to create
4642 // send it the signal. Otherwise we notify the user debugger
4743 // page must be busy -- wait for it to become unbusy
4758 // see if the backing store has it
4760 // insert a fresh page and mark it busy -- we're going to read it in
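Lines 4743-4760 describe the busy-page protocol in the fault path: a page found busy means another thread is reading it in, so the faulter waits; a missing page is inserted busy first, read from the backing store while busy, and only then marked unbusy. A simplified single-threaded sketch of that state machine (the `Cache`/`FaultPage`/`ReadFromBackingStore` names are illustrative, not the kernel's):

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch of the busy-page protocol: the busy flag guards a
// page whose contents are still being read in.
struct Page { bool busy; int contents; };

struct Cache {
	std::map<long, Page> pages;

	// Returns the page at `offset`, reading it in if absent.
	Page& FaultPage(long offset)
	{
		std::map<long, Page>::iterator found = pages.find(offset);
		if (found != pages.end()) {
			// In the kernel, busy here would mean another thread is
			// reading the page in; the faulter waits until it becomes
			// unbusy and retries. Single-threaded, it is never busy.
			assert(!found->second.busy);
			return found->second;
		}

		// Insert a fresh page and mark it busy -- we're going to read
		// it in, and nobody may touch it meanwhile.
		Page& page = pages[offset];
		page.busy = true;
		page.contents = ReadFromBackingStore(offset);	// simulated I/O
		page.busy = false;	// done: concurrent faulters may proceed
		return page;
	}

	int ReadFromBackingStore(long offset)
	{
		return (int)offset * 2;	// stand-in for actual disk I/O
	}
};
```

Marking the page busy before the read and unbusy after is what lets concurrent faulters on the same offset block instead of issuing a second read.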
4826 // object so we need to copy it and stick it into the top cache.
4829 // TODO: If memory is low, it might be a good idea to steal the page
4831 FTRACE(("get new page, copy it, and put it into the topmost cache\n"));
4836 // the source page doesn't disappear, we mark it busy.
4945 // We have the area, it was a valid access, so let's try to resolve the
4981 // it's mapped in read-only, so that we cannot overwrite someone else's
5000 // Yep there's already a page. If it's ours, we can simply adjust
5001 // its protection. Otherwise we have to unmap it.
5006 // to make sure it isn't wired.
5015 // If the page is wired, we can't unmap it. Wait until it is unwired
5017 				// writing, since it isn't in the topmost cache. So we can safely
5026 // ... but since we allocated a page and inserted it into
5027 // the top cache, remove and free it first. Otherwise we'd
5029 // cache has a page that would shadow it.
5044 // is as follows: Since the page is mapped, it must live in the top
5049 // must have found it and therefore it cannot be busy either.
5408 // Okay, looks good so far, so let's do it
5416 // Growing the cache can fail, so we do it first.
5469 // shrinking the cache can't fail, so we do it now
5527 and copies from/to it directly.
5583 // Page not found in this cache -- if it is paged out, we must not try
5584 // to get it from lower caches.
5771 // wired the area itself, nothing disturbing will happen with it
5951 // wired the area itself, nothing disturbing will happen with it
5977 // to it when reverting what we've done so far.
6265 // if it's only one entry, we will silently accept the missing ending
6385 // Now we can reset the protection to whatever it was before.
6714 // will be restricted in the future, and so it will.
6881 // The whole area is covered: let set_area_protection handle it.
6953 // requested, we have to unmap it. Otherwise we can re-map it with
7037 // Especially when a lot has to be written, it might take ages
7038 // until it really hits the disk.
7046 // NOTE: If I understand it correctly the purpose of MS_INVALIDATE is to
7228 /*! The physical_entry structure has changed. We need to translate it to the