Searched hist:157 (Results 1 - 10 of 10) sorted by path

/freebsd-10.0-release/lib/msun/
Symbol.map    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
/freebsd-10.0-release/lib/msun/ld128/
s_expl.c    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
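
The revision above adds long double exp() implementations following Tang's table-driven method. A minimal double-precision sketch of that scheme follows; it is illustrative only, not the msun s_expl.c code, and exp_sketch() is a made-up name. The reduction writes x = n*ln(2)/32 + r with |r| <= ln(2)/64, so exp(x) = 2^m * 2^(j/32) * exp(r) with n = 32*m + j; 2^(j/32) would come from a 32-entry table and exp(r) from a short polynomial.

/*
 * Illustrative sketch of Tang's table-driven exp(); the real ld80/ld128
 * code uses higher precision, a precomputed table, a minimax polynomial,
 * and the overflow/underflow and NaN handling omitted here.
 * Build with: cc -std=c99 exp_sketch.c -lm
 */
#include <math.h>
#include <stdio.h>

static double
exp_sketch(double x)
{
	const double ln2_32 = 0x1.62e42fefa39efp-1 / 32;	/* ln(2)/32 */
	double r, poly, scale;
	int n, j, m;

	n = (int)rint(x / ln2_32);	/* x ~= n*ln(2)/32 + r, |r| <= ln(2)/64 */
	r = x - n * ln2_32;
	j = n & 31;			/* table index for 2^(j/32) */
	m = (n - j) / 32;		/* final power-of-two scaling */

	/* exp(r) via a short Taylor series; Tang uses a minimax fit. */
	poly = 1 + r * (1 + r * (0.5 + r * (1.0 / 6 + r * (1.0 / 24))));

	scale = exp2(j / 32.0);		/* stands in for the table lookup */
	return (ldexp(scale * poly, m));
}

int
main(void)
{
	printf("exp_sketch(1) = %.15g, exp(1) = %.15g\n",
	    exp_sketch(1.0), exp(1.0));
	return (0);
}

The ld80 and ld128 variants differ mainly in the working precision, the width of the table entries, and the degree of the polynomial.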
/freebsd-10.0-release/lib/msun/ld80/
s_expl.c    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
/freebsd-10.0-release/lib/msun/man/
exp.3    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
/freebsd-10.0-release/lib/msun/src/
e_exp.c    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
math.h    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
math_private.h    238722 Mon Jul 23 17:24:05 MDT 2012 kargl  Compute the exponential of x for Intel 80-bit format and IEEE 128-bit
format. These implementations are based on

PTP Tang, "Table-driven implementation of the exponential function
in IEEE floating-point arithmetic," ACM Trans. Math. Soft., 15,
144-157 (1989).

PR: standards/152415
Submitted by: kargl
Reviewed by: bde, das
Approved by: das (mentor)
/freebsd-10.0-release/sys/i386/i386/
pmap.c    158060 Wed Apr 26 19:49:20 MDT 2006 peter  MFamd64: shrink pv entries from 24 bytes to about 12 bytes. (336 pv entries
per page = effectively 12.19 bytes per pv entry after overheads).
Instead of using a shared UMA zone for 24 byte pv entries (two 8-byte tailq
nodes, a 4 byte pointer, and a 4 byte address), we allocate a page at a
time per process. This provides 336 pv entries per process (actually, per
pmap address space) and eliminates one of the 8-byte tailq entries since
we can now track per-process pv entries implicitly.  The per-entry pointer to
the pmap is eliminated by using address arithmetic to locate the metadata at
the head of the page, which holds a single pointer shared by all 336 entries.
There is an 11-int bitmap for the freelist of those 336 entries.

This is mostly a mechanical conversion from amd64, except:
* i386 has to allocate kvm and map the pages, amd64 has them outside of kvm
* native word size is smaller, so bitmaps etc become 32 bit instead of 64
* no dump_add_page() etc., because the pages are always in kvm.
* various pmap internals tweaks because pmap uses direct map on amd64 but
on i386 it has to use sched_pin and temporary mappings.

Also, sysctl vm.pmap.pv_entry_max and vm.pmap.shpgperproc are now
dynamic sysctls. Like on amd64, i386 can now tune the pv entry limits
without a recompile or reboot.

This is important because of the following scenario. If you have a 1GB
file (262144 pages) mmap()ed into 50 processes, that requires 13 million
pv entries. At 24 bytes per pv entry, that is 314MB of ram and kvm, while
at 12 bytes it is 157MB. A 157MB saving is significant.

Test-run by: scottl (Thanks!)
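
A rough user-space sketch of the chunk layout this log message describes follows; the struct and field names are made up, not the kernel's, and uint32_t fields stand in for 4-byte i386 pointers. One 4 KB page holds a small header (the shared pmap pointer, chunk list linkage, and an 11-word free bitmap, since 11 * 32 = 352 >= 336) followed by 336 12-byte pv entries, which is where the effective 12.19 bytes per entry (4096 / 336) comes from.

/*
 * Illustrative layout only; compare with struct pv_chunk in the real
 * i386/amd64 pmap code.  All "pointers" are uint32_t so the sizes match
 * the 32-bit i386 figures quoted in the commit message.
 */
#include <stdint.h>
#include <stdio.h>

#define I386_PAGE_SIZE	4096
#define NPCM		11	/* words in the free bitmap */
#define NPCPV		336	/* pv entries per chunk page */

/* 12 bytes: the mapped va plus the one remaining tailq link. */
struct pv_entry_sketch {
	uint32_t	pv_va;
	uint32_t	pv_next;
	uint32_t	pv_prev;
};

/* Header at the start of the chunk page, shared by all 336 entries. */
struct pv_chunk_sketch {
	uint32_t	pc_pmap;	/* single back-pointer to the pmap */
	uint32_t	pc_list_next;	/* chunk list linkage */
	uint32_t	pc_list_prev;
	uint32_t	pc_map[NPCM];	/* 1 bit per free pv entry */
	struct pv_entry_sketch pc_pventry[NPCPV];
};

int
main(void)
{
	printf("pv entry: %zu bytes, chunk: %zu bytes, page: %d bytes\n",
	    sizeof(struct pv_entry_sketch),
	    sizeof(struct pv_chunk_sketch), I386_PAGE_SIZE);
	printf("effective bytes per pv entry: %.2f\n",
	    (double)I386_PAGE_SIZE / NPCPV);
	return (0);
}

Because each chunk header sits at the start of its page, masking a pv entry's address down to a page boundary recovers the header, which is the address arithmetic the message refers to.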
/freebsd-10.0-release/sys/i386/include/
pmap.h    158060 Wed Apr 26 19:49:20 MDT 2006 peter  MFamd64: shrink pv entries from 24 bytes to about 12 bytes. (336 pv entries
per page = effectively 12.19 bytes per pv entry after overheads).
Instead of using a shared UMA zone for 24 byte pv entries (two 8-byte tailq
nodes, a 4 byte pointer, and a 4 byte address), we allocate a page at a
time per process. This provides 336 pv entries per process (actually, per
pmap address space) and eliminates one of the 8-byte tailq entries since
we can now track per-process pv entries implicitly.  The per-entry pointer to
the pmap is eliminated by using address arithmetic to locate the metadata at
the head of the page, which holds a single pointer shared by all 336 entries.
There is an 11-int bitmap for the freelist of those 336 entries.

This is mostly a mechanical conversion from amd64, except:
* i386 has to allocate kvm and map the pages, amd64 has them outside of kvm
* native word size is smaller, so bitmaps etc become 32 bit instead of 64
* no dump_add_page() etc., because the pages are always in kvm.
* various pmap internals tweaks because pmap uses direct map on amd64 but
on i386 it has to use sched_pin and temporary mappings.

Also, sysctl vm.pmap.pv_entry_max and vm.pmap.shpgperproc are now
dynamic sysctls. Like on amd64, i386 can now tune the pv entry limits
without a recompile or reboot.

This is important because of the following scenario. If you have a 1GB
file (262144 pages) mmap()ed into 50 processes, that requires 13 million
pv entries. At 24 bytes per pv entry, that is 314MB of ram and kvm, while
at 12 bytes it is 157MB. A 157MB saving is significant.

Test-run by: scottl (Thanks!)
/freebsd-10.0-release/sys/kern/
vfs_cluster.c    211126 Mon Aug 09 21:09:20 MDT 2010 ivoras  Bumping the read-ahead count once more, to a value equivalent to 512 KiB on
most systems, based on benchmark results on a low-end fibre channel SAN
under VMWare:

vfs.read_max               read performance
 8 (historical default)     83 MB/s
16 (recent bump)           131 MB/s
32 (this version)          152 MB/s
64                         157 MB/s

(results are +/- 3 MB/s)

As read-ahead is heuristic, based on past IO requests, it shouldn't be
problematic. The new default is still smaller than in other OSes.
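
Since vfs.read_max is a run-time sysctl, the limit can be inspected or changed without a rebuild, e.g. sysctl vfs.read_max=32 from the shell. The snippet below is an illustrative FreeBSD user-space program (not from the tree) that does the same through sysctlbyname(3); 32 is the default this revision introduces, which the log equates to roughly 512 KiB of read-ahead on most systems.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	int cur, want = 32;		/* the default set by this revision */
	size_t len = sizeof(cur);

	/* Read the current cluster read-ahead limit. */
	if (sysctlbyname("vfs.read_max", &cur, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("vfs.read_max = %d\n", cur);

	/* Raising it requires root; equivalent to sysctl vfs.read_max=32. */
	if (sysctlbyname("vfs.read_max", NULL, NULL, &want, sizeof(want)) == -1)
		perror("sysctlbyname (set)");
	return (0);
}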

Completed in 336 milliseconds