#
bb2808d6 |
|
24-Oct-2023 |
Augustin Cavalier <waddlesplash@gmail.com> |
bootloader: Implement TSC calibration via hypervisor CPUID leaf. While debugging some problems on the HaikuPorts build VMs, mmlr noticed their clocks had an alarming amount of drift. This prompted an investigation into TSC calibration mechanisms, and the discovery that there is a VM-specific one which we did not implement. This mechanism is more accurate than counting cycles on VMs where cycles can be "stolen" (the probable cause of the aforementioned clock drift.) Tested in VMware (works out of the box) and on QEMU/KVM (may need TSC frequency specified or a host with invariant TSC.) Change-Id: I4ccfdb2e4e2621404ec9026e7106c02bf96faf18 Reviewed-on: https://review.haiku-os.org/c/haiku/+/7063 Reviewed-by: waddlesplash <waddlesplash@gmail.com>
|
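The mechanism described above can be sketched roughly as follows. This is a minimal illustration, not Haiku's actual bootloader code; it follows the de-facto hypervisor CPUID convention in which synthetic leaves start at 0x40000000 and leaf 0x40000010 reports the TSC frequency in kHz in EAX.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: reading the hypervisor's timing leaf avoids
// counting cycles, which is unreliable when the host steals time from
// the guest. Leaf 0x40000010 returns the TSC frequency in kHz in EAX.
static const uint32_t kHypervisorTSCInfoLeaf = 0x40000010;

// `maxHypervisorLeaf` is EAX of CPUID leaf 0x40000000; `eax` is EAX of
// leaf 0x40000010. Returns the TSC frequency in Hz, or 0 when the leaf
// is unavailable and the caller must fall back to cycle counting.
uint64_t hypervisor_tsc_frequency(uint32_t maxHypervisorLeaf, uint32_t eax)
{
	if (maxHypervisorLeaf < kHypervisorTSCInfoLeaf || eax == 0)
		return 0;
	return (uint64_t)eax * 1000;	// kHz -> Hz
}
```

A guest reporting 2,000,000 kHz yields 2 GHz; a zero result signals the calibration fallback path.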
#
ef611e96 |
|
10-Aug-2023 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: also write the cpu number when the rdpid instruction is available Change-Id: I5b37fe8aff9b4cf12fbd4dd60a91eb09f11f4e2b Reviewed-on: https://review.haiku-os.org/c/haiku/+/6807 Reviewed-by: waddlesplash <waddlesplash@gmail.com>
|
#
67ee1d1a |
|
10-Aug-2023 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: don't load microcode on Intel when already up to date. On Pentium Silver, loading the microcode on the boot CPU also updates the other ones. Change-Id: Ifbd767e7d73fdbc8ae2bf0740fcce523e500de1b Reviewed-on: https://review.haiku-os.org/c/haiku/+/6806 Reviewed-by: waddlesplash <waddlesplash@gmail.com>
|
#
37223744 |
|
10-Jan-2023 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: add a hybrid type per CPU, to be dumped when the feature exists (for Alder Lake CPUs). Change-Id: I4beba04e3ac95d7564684ee86de99c894b57a15c Reviewed-on: https://review.haiku-os.org/c/haiku/+/5988 Reviewed-by: waddlesplash <waddlesplash@gmail.com> Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
|
#
fb69e061 |
|
29-Nov-2022 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel: load AMD CPU microcode update if provided by the bootloader. We detect basic CPU info before loading the microcode, to be able to determine the vendor and avoid any update on a hypervisor. I couldn't test because my CPU doesn't have any update available. Change-Id: I6aea830158423b3ee13b640be8a788fc9041e23c Reviewed-on: https://review.haiku-os.org/c/haiku/+/5859 Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org> Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
|
#
e8deaebd |
|
25-Nov-2022 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: dump the CPPC feature Change-Id: Ic4d286108520147defd6251a15943a14ab96e264 Reviewed-on: https://review.haiku-os.org/c/haiku/+/5829 Reviewed-by: waddlesplash <waddlesplash@gmail.com>
|
#
cedd8555 |
|
05-Nov-2022 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: init the tsc frequency and clock speed from MSR when available, only for newer AMD CPUs. Tested on R5300U. Change-Id: I44be2efca37b1738a759a15140e5fd8d3b5ac7b0 Reviewed-on: https://review.haiku-os.org/c/haiku/+/5804 Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
|
#
fbca1c40 |
|
23-Sep-2022 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86_64: configure LFENCE as a serializing instruction on AMD Change-Id: I152bf41c3479f81fc458abdf8d89874ffa3a08d7 Reviewed-on: https://review.haiku-os.org/c/haiku/+/5691 Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
|
#
2e69f2b0 |
|
23-Sep-2022 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: init the tsc frequency and clock speed from CPUID when available, only for newer Intel CPUs. Change-Id: Icd83f3b643796bfb3725b5c8877b9e7828bc71d9 Reviewed-on: https://review.haiku-os.org/c/haiku/+/5688 Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk> Reviewed-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com> Reviewed-by: Alex von Gluck IV <kallisti5@unixzen.com> Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org>
|
#
4106e3f1 |
|
18-Sep-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: rework get_frequency_for: we don't sample if the last sample is too recent, and use the cached result instead. Change-Id: I17ed29bda7fe7276f1a4148b3e1985c9d32ae032 Reviewed-on: https://review.haiku-os.org/c/haiku/+/4101 Reviewed-by: Jérôme Duval <jerome.duval@gmail.com> Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
#
cdd2ed5a |
|
10-Nov-2021 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: CPUID leaf 0x1f is a preferred superset of leaf 0x0b. Intel recommends first checking for the existence of CPUID leaf 1FH before using leaf 0BH. Change-Id: Iba677186521e086fa06bcc4fe42eaed4ba030e6d Reviewed-on: https://review.haiku-os.org/c/haiku/+/4719 Tested-by: Commit checker robot <no-reply+buildbot@haiku-os.org> Reviewed-by: waddlesplash <waddlesplash@gmail.com>
|
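The recommended fallback order can be sketched as below (illustrative logic only, not Haiku's code; `leaf1FValid` stands in for the check that CPUID.(0x1F,0):EBX is non-zero):

```cpp
#include <cassert>
#include <cstdint>

// Prefer topology leaf 0x1F when the CPU exposes it and it actually
// reports levels; otherwise fall back to leaf 0x0B.
// `maxBasicLeaf` is CPUID.0:EAX.
uint32_t topology_leaf(uint32_t maxBasicLeaf, bool leaf1FValid)
{
	if (maxBasicLeaf >= 0x1f && leaf1FValid)
		return 0x1f;
	if (maxBasicLeaf >= 0x0b)
		return 0x0b;
	return 0;	// no extended topology enumeration available
}
```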
#
102bf4b7 |
|
02-Nov-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86_64: fix build
|
#
18112d73 |
|
31-Oct-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86_64: use standard xstate length for sse/avx when found invalid Change-Id: I1c93e5dd8de80bf155eabb55c77119349a7186ab Reviewed-on: https://review.haiku-os.org/c/haiku/+/3372 Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
|
#
011fd524 |
|
26-Sep-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: dump features 7 ecx Change-Id: I4c166ceb64c3a472ee2a849beca6ee041ef3af89 Reviewed-on: https://review.haiku-os.org/c/haiku/+/3279 Reviewed-by: Rene Gollent <rene@gollent.com>
|
#
357b9d3c |
|
05-Sep-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
x86: identify Hygon vendor. It's a Zen-based CPU: rely on the AMD support code. Change-Id: Ia980a42457575bf8d1130d813310a285bf137691 Reviewed-on: https://review.haiku-os.org/c/haiku/+/3217 Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
#
eb7ac342 |
|
05-Sep-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: detect power subfeatures Change-Id: Id159f0d7fc7816b6a40b9cf28f53dfdbebd04a73 Reviewed-on: https://review.haiku-os.org/c/haiku/+/3211 Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
|
#
4df4ae2e |
|
17-Aug-2020 |
Michael Lotz <mmlr@mlotz.ch> |
kernel/x86: Enable machine check exceptions if supported. This enables generation of exceptions that are due to uncorrected hardware errors. The exception handlers were already in place and will now actually trigger kernel panics. Note that this is the simplest form of MCE "handling" and does not add anything of the broader machine check architecture (MCA) that would also allow reporting of corrected errors. As MCEs are generally hard to decode due to their hardware specificity, this merely makes such problems more obvious. It might help to discern hardware issues in cases that would otherwise just triple fault and cause a reboot. Change-Id: I9e3a2640458f7c562066478d0ca90e3a46c3a325 Reviewed-on: https://review.haiku-os.org/c/haiku/+/3155 Reviewed-by: waddlesplash <waddlesplash@gmail.com> Reviewed-by: Axel Dörfler <axeld@pinc-software.de>
|
#
94951269 |
|
05-May-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86_64: AVX support, when xsave or xsavec are supported. Breaks vregs compatibility. Changes the thread structure object cache alignment to 64. The xsave fpu_state size isn't fixed (it is for instance 832 here), thus I picked 1024. Change-Id: I4a0cab0bc42c1d37f24dcafb8259f8ff24a330d2 Reviewed-on: https://review.haiku-os.org/c/haiku/+/2849 Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
#
c74c3473 |
|
08-May-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: detect xsave subfeatures Change-Id: Ida635441faaea4fb060e9f77ca3f4f167dc4bfe4 Reviewed-on: https://review.haiku-os.org/c/haiku/+/2617 Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
#
56bb1bd5 |
|
20-Feb-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel: load cpu microcode update if loaded by the bootloader add optional fields for microcode in kernel_args. Change-Id: Ic5fb54cf6c9f489a2d1cdda00f63980c11dcdaeb Reviewed-on: https://review.haiku-os.org/c/haiku/+/2264 Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
#
073e295a |
|
16-Feb-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel/x86: store the cpu number in TSC_AUX if rdtscp is available. On modern x86, one can use __rdtscp to get the current CPU in userland. Change-Id: I1767e379606230a75e4622637c7a5aed9cdf9ab0 Reviewed-on: https://review.haiku-os.org/c/haiku/+/2248 Reviewed-by: Adrien Destugues <pulkomandy@gmail.com>
|
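The round trip described above can be illustrated with a mock (this is not Haiku code; the MSR is modeled as an array so it can be shown outside ring 0):

```cpp
#include <cassert>
#include <cstdint>

// The kernel writes each CPU's index into the IA32_TSC_AUX MSR
// (0xC0000103) during per-CPU init; userland then receives that value
// in ECX as a side effect of the RDTSCP instruction, with no syscall.
struct MockMsrBank {
	uint32_t tscAux[4];	// one IA32_TSC_AUX value per CPU (mock)
};

// Stands in for wrmsr(IA32_TSC_AUX, cpuIndex) in per-CPU init.
void init_tsc_aux(MockMsrBank& msrs, uint32_t cpuIndex)
{
	msrs.tscAux[cpuIndex] = cpuIndex;
}

// Stands in for the ECX output of RDTSCP on the given CPU.
uint32_t rdtscp_aux(const MockMsrBank& msrs, uint32_t runningOnCpu)
{
	return msrs.tscAux[runningOnCpu];
}
```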
#
1a836b9e |
|
12-Feb-2020 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel: x86: add some more cpuid flags. Change-Id: If81c8e38c4e5a8347b5818440a7516298be585bc Reviewed-on: https://review.haiku-os.org/c/haiku/+/2242 Reviewed-by: Jérôme Duval <jerome.duval@gmail.com>
|
#
26e0b0c8 |
|
24-Aug-2019 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/x86_64: Add errata patching. The patched errata are only the AMD ones FreeBSD patches (it seems there are no Intel errata that can be patched this way, they are all in microcode updates ... or can't be patched in the CPU at all.) This also seems to be roughly the point in the boot that FreeBSD patches these, too, despite how "critical" some of them seem. Change-Id: I9065f8d025332418a21c2cdf39afd7d29405edcc Reviewed-on: https://review.haiku-os.org/c/haiku/+/1740 Reviewed-by: Jessica Hamilton <jessica.l.hamilton@gmail.com>
|
#
96bc0f46 |
|
15-Jan-2019 |
Augustin Cavalier <waddlesplash@gmail.com> |
kernel/x86: Fix some warnings.
|
#
6086986d |
|
04-Jan-2019 |
Rob Gill <rrobgill@protonmail.com> |
kernel/x86: additional msr and cpuid items Adds SSBD and L1TF related items Change-Id: Iccea2bb9e057e0d011a18609212f175f9b5e678d Reviewed-on: https://review.haiku-os.org/825 Reviewed-by: Adrien Destugues <pulkomandy@pulkomandy.tk>
|
#
9dd4d2dd |
|
03-Jan-2018 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel: support for Intel SMAP and SMEP on x86_64. SMAP will generate page faults when the kernel tries to access user pages unless overridden. If SMAP is enabled, the override instructions are written where needed in memory with binary "altcodepatches". Support is enabled by default, and might be disabled per safemode setting. Change-Id: Ife26cd765056aeaf65b2ffa3cadd0dcf4e273a96
|
#
483c4584 |
|
15-Jan-2018 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel: x86: add some more cpuid flags.
|
#
94090214 |
|
21-Dec-2017 |
Jérôme Duval <jerome.duval@gmail.com> |
kernel: x86: add cpuid feature 7 flags.
|
#
1446507c |
|
31-Dec-2017 |
Fredrik Holmqvist <fredrik.holmqvist@gmail.com> |
Remove the code to force ACPI to shutdown on CPU0 It was discussed and introduced based on docs saying that some systems need this to shutdown properly. I can find no mention of this in ACPICA or ACPI docs. This needs to be re-evaluated, as all my shutdowns have been successful after disabling it, and I can't locate where this info came from or whether it actually helped. See 1316462ab0fd0f25f7dbb7f23b607d981efd3edc for the original commit and bug 12306 on the current shutdown issue.
|
#
396b7422 |
|
10-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
kernel/x86_64: save fpu state at interrupts The kernel is allowed to use the fpu anywhere, so we must make sure that user state is not clobbered by saving fpu state at interrupt entry. There is no need to do that in case of system calls, since all fpu data registers are caller saved. We do not need, though, to save the whole fpu state at task switch (again, thanks to the calling convention). Only status and control registers are preserved. This patch actually adds the xmm0-15 registers to the clobber list of the task switch code, but the only reason for that is to make sure that nothing bad happens inside the function that executes the task switch. Inspection of the generated code shows that no xmm registers are actually saved. Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
b41f2810 |
|
06-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
boot/x86_64: enable sse early Enable SSE as a part of the "preparation of the environment to run any C or C++ code" in the entry points of stage2 bootloader. SSE2 is going to be used by memset() and memcpy(). Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
f2f91078 |
|
06-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
kernel/x86_64: remove memset and memcpy from commpage There is absolutely no reason for these functions to be in the commpage, they don't do anything that involves the kernel in any way. Additionally, this patch rewrites memset and memcpy in C++; the current implementation is quite simple (though it may perform surprisingly well when dealing with large buffers on cpus with ermsb). Better versions are coming soon. Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
6156a508 |
|
06-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
kernel/x86[_64]: remove get_optimized_functions from cpu modules The possibility to specify custom memcpy and memset implementations in cpu modules is currently unused, and there is generally no point in such a feature. There are only 2 x86 vendors that really matter, and there isn't a very big difference in performance of the generic optimized versions of these functions across different models. Even if we wanted different versions of memset and memcpy depending on the processor model or features, a much better solution would be to use STT_GNU_IFUNC and save one indirect call. Long story short, we don't really benefit in any way from get_optimized_functions and the feature it implements; it only adds unnecessary complexity to the code. Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
e6cfae45 |
|
02-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/x86: Make x2APIC CPU topology detection more future proof The main reason for this patch is to fix the gcc 4.8.2 warning about hierarchyLevels possibly being used uninitialized. Such a thing actually cannot happen, since all x2APIC CPUs are aware of at least 3 topology levels. However, once more topology levels are introduced we will have to deal with CPUs that do not report information about all of them.
|
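Once the per-level shift widths are known (CPUID leaf 0x0B/0x1F reports each level's shift in EAX[4:0]), splitting an x2APIC ID into topology components is pure bit manipulation. A sketch with illustrative shift values, not Haiku's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Decompose an x2APIC ID into SMT / core / package components, given
// the shift widths reported per topology level (e.g. smtShift = 1 for
// 2 threads per core, coreShift = 4 for up to 8 cores per package).
struct TopologyIds {
	uint32_t smt, core, package;
};

TopologyIds decompose_x2apic_id(uint32_t apicId, uint32_t smtShift,
	uint32_t coreShift)
{
	TopologyIds ids;
	ids.smt = apicId & ((1u << smtShift) - 1);
	ids.core = (apicId >> smtShift) & ((1u << (coreShift - smtShift)) - 1);
	ids.package = apicId >> coreShift;
	return ids;
}
```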
#
527da4ca |
|
27-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Separate bootloader and kernel GDT and IDT logic From now on bootloader sets up its own minimal valid GDT and IDT. Then the kernel replaces them with its own tables.
|
#
8cf8e537 |
|
05-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/x86: Inline atomic functions and memory barriers
|
#
046af755 |
|
29-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: Fix stack corruption in cache topology detection
|
#
611376fe |
|
16-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: Let each CPU have its own GDT
|
#
52b442a6 |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: smp_cpu_rendezvous(): Use counter instead of bitmap
|
#
7db89e8d |
|
25-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Rework cpuidle module * Create new interface for cpuidle modules (similar to the cpufreq interface) * Generic cpuidle module is no longer needed * Fix and update Intel C-State module
|
#
077c84eb |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: atomic_*() functions rework * No need for the atomically changed variables to be declared as volatile. * Drop support for atomically getting and setting unaligned data. * Introduce atomic_get_and_set[64]() which works the same as atomic_set[64]() used to. atomic_set[64]() does not return the previous value anymore.
|
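In C++ terms the new split maps onto std::atomic as below (a sketch of the semantics, not Haiku's implementation):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// atomic_get_and_set() is an exchange that returns the previous value;
// atomic_set() is now a plain store that returns nothing.
int32_t atomic_get_and_set_sketch(std::atomic<int32_t>& value,
	int32_t newValue)
{
	return value.exchange(newValue);	// old value comes back
}

void atomic_set_sketch(std::atomic<int32_t>& value, int32_t newValue)
{
	value.store(newValue);	// previous value is discarded
}
```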
#
cf863a50 |
|
16-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Decide whether to use simple or affine scheduler Simple scheduler is used when we do not have to worry about cache affinity (i.e. single core with or without SMT, multicore with all cache levels shared). When we replace gSchedulerLock with more fine grained locking affine scheduler should also be chosen when logical CPU count is high (regardless of cache).
|
#
29e65827 |
|
09-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Remove possibility to yield to all threads Kernel support for yielding to all (including lower priority) threads has been removed. POSIX sched_yield() remains unchanged. If a thread really needs to yield to everyone it can reduce its priority to the lowest possible and then yield (it will then need to manually return to its previous priority upon continuing).
|
#
7039b950 |
|
05-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Fix style issues
|
#
7087b865 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Remove superfluous memset()s
|
#
36cc64a9 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add CPU cache topology detection for AMD and Intel CPUs
|
#
1f50d090 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/util: Add bit hack utilities
|
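Typical helpers such a utility header provides look like the following (the names are assumptions for illustration, not necessarily the committed API):

```cpp
#include <cassert>
#include <cstdint>

// Round up to the next power of two by smearing the highest set bit
// rightward, then adding one.
uint32_t next_power_of_2(uint32_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}

// Count set bits by clearing the lowest set bit each iteration
// (Kernighan's method).
uint32_t count_set_bits(uint32_t v)
{
	uint32_t count = 0;
	for (; v != 0; v &= v - 1)
		count++;
	return count;
}
```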
#
26c38618 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Fix some style issues
|
#
fa6f78ae |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Use uint32 for maximum CPUID leaf number
|
#
c9b6f27d |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add CPU topology detection for AMD processors
|
#
f1644d9d |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Set level shift by counting bits in mask
|
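The technique can be sketched as follows (illustrative, not the committed code): the shift width for a topology level equals the number of set bits in that level's contiguous low mask.

```cpp
#include <cassert>
#include <cstdint>

// E.g. mask 0b111 means the level uses 3 ID bits, so the next level's
// bits start at shift 3. Assumes the mask is contiguous from bit 0.
uint32_t shift_from_mask(uint32_t mask)
{
	uint32_t shift = 0;
	while (mask & 1) {
		shift++;
		mask >>= 1;
	}
	return shift;
}
```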
#
fafeda52 |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Do not return too soon from detectCPUTopology()
|
#
8ec89732 |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add CPU topology detection for Intel processors
|
#
4110b730 |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add support for CPUID sub-leaves Some CPUID leaves may contain one or more sub-leaves accessed by setting ECX to an appropriate value.
|
#
278f66b6 |
|
13-Sep-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Enable NX on non-boot CPUs as soon as possible
|
#
b8dc812f |
|
13-Sep-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Enable NX on non-boot CPUs as soon as possible
|
#
0edcbd27 |
|
26-Aug-2013 |
Jérôme Duval <jerome.duval@gmail.com> |
apic: serialize writes to x2apic MSR... as required by the specifications (it isn't needed with memory mapped i/o).
|
#
0fef11f1 |
|
22-Apr-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
arch: some CPUID leaves may be not available
|
#
103977d0 |
|
17-Apr-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
arch: NX is initialized too early on non-boot CPUs
|
#
e85e399f |
|
17-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
commpage: randomize position of commpage This patch introduces randomization of the commpage position. From now on the commpage table contains offsets from the beginning of the commpage to the particular commpage entry. Similarly, addresses of symbols in the ELF memory image "commpage" are just offsets from the beginning of the commpage. This patch also updates KDL so that commpage entries are recognized and shown correctly in stack traces. An update of Debugger is yet to be done.
|
#
966f2076 |
|
06-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: enable data execution prevention Set the execute disable bit for any page that belongs to an area with neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set. In order to take advantage of the NX bit in 32-bit protected mode, PAE must be enabled. Thus, from now on it is also enabled when the CPU supports the NX bit. vm_page_fault() takes an additional argument which indicates whether the page fault was caused by an illegal instruction fetch.
|
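The page-flag decision amounts to the following sketch (the constants are illustrative placeholders, not Haiku's real flag values; the NX/execute-disable bit is bit 63 of a PAE or long-mode page table entry):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative area protection flags (placeholders, not Haiku's values).
static const uint32_t B_EXECUTE_AREA_SKETCH        = 1u << 2;
static const uint32_t B_KERNEL_EXECUTE_AREA_SKETCH = 1u << 6;
static const uint64_t kPageNXBit = 1ULL << 63;

// Set the execute-disable bit unless the owning area carries one of
// the execute protection flags.
uint64_t apply_nx(uint64_t pageTableEntry, uint32_t areaProtection)
{
	uint32_t executeFlags =
		B_EXECUTE_AREA_SKETCH | B_KERNEL_EXECUTE_AREA_SKETCH;
	if ((areaProtection & executeFlags) == 0)
		pageTableEntry |= kPageNXBit;
	return pageTableEntry;
}
```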
#
211f7132 |
|
06-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: move x86_userspace_thread_exit() from user stack to commpage x86_userspace_thread_exit() is a stub originally placed at the bottom of each thread user stack that ensures any thread invokes exit_thread() upon returning from its main higher level function. Putting anything that is expected to be executed on a stack causes problems when implementing data execution prevention. Code of x86_userspace_thread_exit() is now moved to commpage which seems to be much more appropriate place for it.
|
#
d1f280c8 |
|
01-Apr-2012 |
Hamish Morrison <hamishm53@gmail.com> |
Add support for pthread_attr_get/setguardsize() * Added the aforementioned functions. * create_area_etc() now takes a guard size parameter. * The thread_info::stack_base/end range now refers to the usable range only.
|
#
12e574a3 |
|
04-Nov-2012 |
Jérôme Duval <jerome.duval@gmail.com> |
enlarge the buffer for the CPU features string * 256 bytes wasn't enough for i5-2557m
|
#
9b0d045c |
|
03-Nov-2012 |
Fredrik Holmqvist <fredrik.holmqvist@gmail.com> |
Update to ACPICA 20121018. This is an update from 20120711 and A LOT has happened since then. See https://acpica.org/download/changes.txt for all the changes.
|
#
19187c46 |
|
03-Apr-2012 |
Yongcong Du <ycdu.vmcore@gmail.com> |
x86: Initialize IA32_MSR_ENERGY_PERF_BIAS The lowest 4 bits of the MSR serve as a hint to the hardware to favor performance or energy saving. 0 means a hint preference for highest performance, while 15 corresponds to the maximum energy savings. A value of 7 translates into a hint to balance performance with energy savings. The default reset value of the MSR is 0. If the BIOS doesn't initialize the MSR, the hardware will run in performance state. This patch initializes the MSR with a value of 7 for balance between performance and energy savings. Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
|
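The update described is a masked write of the MSR's low 4 bits; a sketch (the rdmsr/wrmsr themselves are omitted, and IA32_ENERGY_PERF_BIAS is MSR 0x1B0 per the Intel SDM):

```cpp
#include <cassert>
#include <cstdint>

// Only the low 4 bits carry the hint (0 = max performance,
// 15 = max energy saving); everything else must be preserved.
static const uint64_t kEnergyPerfBiasMask = 0xf;
static const uint64_t kBalancedHint = 7;	// midpoint hint

uint64_t apply_balanced_hint(uint64_t msrValue)
{
	return (msrValue & ~kEnergyPerfBiasMask) | kBalancedHint;
}
```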
#
d2a1be1c |
|
18-Aug-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Cleaner separation of 32-/64-bit specific CPU/interrupt code. Renamed {32,64}/int.cpp to {32,64}/descriptors.cpp, which now contain functions for GDT and TSS setup that were previously in arch_cpu.cpp, as well as the IDT setup code. These get called from the init functions in arch_cpu.cpp, rather than having a bunch of ifdef'd chunks of code for 32/64.
|
#
59ae45c1 |
|
21-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Fixed commpage for x86_64. Since the commpage is at a kernel address, changed 64-bit paging code to match x86's behaviour of allowing user-accessible mappings to be created in the kernel portion of the address space. This is also required by some drivers.
|
#
5234e66d |
|
21-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Optimized memcpy/memset for x86_64.
|
#
a51a5f3e |
|
12-Jul-2012 |
Fredrik Holmqvist <fredrik.holmqvist@gmail.com> |
Fixes to Haiku specific code to work with ACPICA 20120711.
|
#
76a1175d |
|
11-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Support for SMP on x86_64. No major changes to the kernel: just compiled in arch_smp.cpp and fixed the IDT load in arch_cpu_init_percpu to use the correct limit for x86_64 (uses sizeof(interrupt_descriptor)). In the boot loader, changed smp_boot_other_cpus to construct a temporary GDT and get the page directory address from CR3, as what's in kernel_args will be 64-bit stuff and will not work to switch the CPUs into 32-bit mode in the trampoline code. Refactored 64-bit kernel entry code to not use the stack after disabling paging, as the secondary CPUs are given a 32-bit virtual stack address by the SMP trampoline code which will no longer work.
|
#
609b308e |
|
10-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Return B_NOT_SUPPORTED for shutdown if ACPI is unavailable (no APM on x86_64).
|
#
5670b0a8 |
|
09-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Moved the 32-bit page fault handler to arch_int.cpp, use it for x86_64. A proper page fault handler was required for areas that were not locked into the kernel address space. This enables the boot process to get up to the point of trying to find the boot volume.
|
#
b5c9d24a |
|
09-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Implemented threading for x86_64. * Thread creation and switching is working fine, however threads do not yet get interrupted because I've not implemented hardware interrupt handling yet (I'll do that next). * I've made some changes to struct iframe: I've removed the e/r prefixes from the member names for both 32/64, so now they're just named ip, ax, bp, etc. This makes it easier to write code that works with both 32/64 without having to deal with different iframe member names.
|
#
8c5e7471 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Don't need to shift the factor in system_time(), just store the already shifted value.
|
#
5e9bb17d |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Renamed remaining i386_* functions to x86_* for consistency.
|
#
cc248cf2 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
A couple of bug fixes. * mmu_get_virtual_mapping() should check that the page directory entry is present rather than assuming there's a page table there. This was resulting in some invalid mappings being created in the 64-bit virtual address space. * arch_vm_init_end() should clear from KERNEL_LOAD_BASE to virtual_end, not from KERNEL_BASE. On x86_64 this was causing it to loop through ~512GB of address space, which obviously was taking quite a while.
|
#
5c7d5218 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Implemented system_time() for x86_64. * Uses 64-bit multiplication; the special handling for CPUs clocked < 1 GHz in system_time_nsecs() is not required like on x86. * Tested against a straight conversion of the x86 version, noticeably faster with a large number of system_time() calls.
|
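The fixed-point conversion behind this can be sketched as follows (an illustration, not the committed code; GCC's __uint128_t stands in for the CPU's native 64x64 -> 128-bit multiply):

```cpp
#include <cassert>
#include <cstdint>

// Precompute factor = (1000000 << 32) / tscFrequency once at
// calibration; each system_time() call is then one multiply and one
// shift instead of a division.
uint64_t compute_conversion_factor(uint64_t tscFrequencyHz)
{
	return (1000000ULL << 32) / tscFrequencyHz;
}

// Convert a TSC reading to microseconds using the cached factor.
uint64_t system_time_sketch(uint64_t tsc, uint64_t factor)
{
	return (uint64_t)(((__uint128_t)tsc * factor) >> 32);
}
```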
#
e276cc04 |
|
05-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Finished implementation of x86_64 paging. * vm_init now runs up until create_preloaded_image_areas(), which needs fixing to handle ELF64. * Not completely tested. I know Map(), Unmap() and Query() work fine, the other methods have not been tested as the kernel doesn't boot far enough for any of them to be called yet. As far as I know they're correct, though. * Not yet implemented the destructor for X86VMTranslationMap64Bit or Init() for a user address space.
|
#
4304bb98 |
|
04-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Added arch_cpu.cpp to the x86_64 build. * Some things are currently ifndef'd out completely for x86_64 because they aren't implemented, there's a few other ifdef's to handle x86_64 differences but most of the code works unchanged. * Renamed some i386_* functions to x86_*. * Added a temporary method for setting the current thread on x86_64 (a global variable, not SMP safe). This will be changed to be done via the GS segment but I've not implemented that yet.
|
#
4e8fbfb2 |
|
03-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
x86_{read,write}_cr{0,4} can just be implemented as macros, put an x86_ prefix on the other read/write macros for consistency.
|
#
cbfe5fcd |
|
03-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Removed redundant x86 sources/headers.
|
#
fb8447d5 |
|
02-Jul-2012 |
Rene Gollent <anevilyak@gmail.com> |
Fix ticket #8650. - Replace arch_cpu_user_strlcpy() and arch_cpu_user_memset() with x86 assembly versions. These correctly handle the fault handler, which had broken again on gcc4 for the C versions, causing stack corruption in certain error cases. The other architectures will still need to have corresponding asm variants added in order for them to not run into the same issue though.
|
#
e5fc2bfc |
|
26-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Implemented long mode setup/switch code, the bootloader can now start the 64-bit kernel! The setup procedure is fairly simple: create a 64-bit GDT and 64-bit page tables that include all kernel mappings from the 32-bit address space, but at the correct 64-bit address, then go through kernel_args and change all virtual addresses to 64-bit addresses, and finally switch to long mode and jump to the kernel.
|
#
45cf3294 |
|
03-Apr-2012 |
Yongcong Du <ycdu.vmcore@gmail.com> |
x86: add cpuid feature 6 flags Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
|
#
cc586f16 |
|
07-Apr-2012 |
Yongcong Du <ycdu.vmcore@gmail.com> |
x86: AMD C1E with no ARAT (Always Running APIC Timer) idle support AMD C1E is a BIOS controlled C3 state. Certain processor families may cut off the TSC and the lapic timer when in a deep C state, including the C1E state, thus the cpu can't be woken up and the system will hang. This patch firstly adds support for idle-routine selection during boot. Then it implements the amdc1e_noarat_idle() routine, which checks the MSR that contains the C1eOnCmpHalt (bit 28) and SmiOnCmpHalt (bit 27) bits before executing the halt instruction, then clears them once set. Intel C1E doesn't have such a problem, however. The difference between C1E and C3 is that the transition into C1E is not initiated by the operating system; the system enters the C1E state automatically when both cores enter the C1 state. As for Intel C1E, it means "reduce CPU voltage before entering corresponding Cx-state". This patch may fix #8111, #3999, #7562, #7940 and #8060. Copied from the description of #3999: >but for some reason I hit the power button instead of the reset one. And >the boot continued!! The reason is that the CPUs are woken up once the power button is hit. Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
|
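The pre-HLT check described can be sketched as below. The bit positions come from the commit message; the MSR itself (AMD's interrupt-pending message register, commonly 0xC0010055) is an assumption here, and the actual rdmsr/wrmsr are omitted:

```cpp
#include <cassert>
#include <cstdint>

// If either bit is set, the core would drop into C1E on HLT, so the
// idle routine clears them before halting.
static const uint64_t kSmiOnCmpHalt = 1ULL << 27;
static const uint64_t kC1eOnCmpHalt = 1ULL << 28;

bool c1e_bits_set(uint64_t msrValue)
{
	return (msrValue & (kSmiOnCmpHalt | kC1eOnCmpHalt)) != 0;
}

uint64_t clear_c1e_bits(uint64_t msrValue)
{
	return msrValue & ~(kSmiOnCmpHalt | kC1eOnCmpHalt);
}
```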
#
c1cd48b7 |
|
21-Feb-2012 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel: Fix fpu on non-apic systems * If apic is not present, the smp code never gets called to set up the fpu. * Detect lack of apic, and set up fpu in arch_cpu. * Should fix #8346 and #8348
|
#
3f1eed70 |
|
14-Feb-2012 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel: x86 SSE improvements * Prepend x86_ to non-static x86 code * Add x86_init_fpu function to kernel header * Don't init fpu multiple times on smp systems * Verified fpu is still started on smp and non-smp * SSE code still generates general protection faults on smp systems though
|
#
8dd1e875 |
|
20-Jan-2012 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel: Fix FPU SSE + MMX instruction usage. * Rename init_sse to init_fpu and handle FPU setup. * Stop trying to set up the FPU before VM init. We tried to set up the FPU before VM init, then set it up again after VM init with SSE extensions; this caused SSE and MMX applications to crash. * Be more logical in FPU setup by detecting the CPU flag prior to enabling the FPU (it's unlikely Haiku will run on a processor without an fpu... but let's be consistent). * SSE2 gcc code now runs (faster even) without GPF * tqh confirms his previously crashing mmx code now works * The non-SSE FPU enable after VM init needs testing!
|
#
11977ba8 |
|
20-Jun-2011 |
Jérôme Duval <korli@users.berlios.de> |
added more cpu feature flags for x86 git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42263 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e85a4793 |
|
02-Jan-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
strcpy() -> strlcpy() (CID 7986). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40081 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
7a1123a7 |
|
15-Aug-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* Moved the "run me on the boot CPU" code to where it is actually used. * Added a TODO that thread_yield() doesn't like to be called from the idle thread. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38109 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1316462a |
|
20-Jul-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* Added some test code to make sure that we run on the boot CPU on shutdown; I haven't tested it on the problematic machine yet, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37635 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
a8ad734f |
|
14-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Introduced structures {virtual,physical}_address_restrictions, which specify restrictions for virtual/physical addresses. * vm_page_allocate_page_run(): - Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not taken into account. - Takes a physical_address_restrictions instead of base/limit and also supports alignment and boundary restrictions, now. * map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/ ReserveAddressRange() take a virtual_address_restrictions parameter, now. They also support an alignment independent from the range size. * create_area_etc(), vm_create_anonymous_area(): Take {virtual,physical}_address_restrictions parameters, now. * Removed no longer needed B_PHYSICAL_BASE_ADDRESS. * DMAResources: - Fixed potential overflows of uint32 when initializing from device node attributes. - Fixed bounce buffer creation TODOs: By using create_area_etc() with the new restrictions parameters we can directly support physical high address, boundary, and alignment. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1b3e83ad |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved paging related files to new subdirectories paging and paging/32bit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37060 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5aa0503c |
|
07-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed i386_translation_map_get_pgdir() and adjusted the one place where it was used. * Renamed X86VMTranslationMap to X86VMTranslationMap32Bit and pulled the paging method agnostic part into new base class X86VMTranslationMap. * Moved X86PagingStructures into its own header/source pair. * Moved pgdir_virt from X86PagingStructures to X86PagingStructures32Bit where it is actually used. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37055 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
84217140 |
|
05-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86: * Renamed vm_translation_map_arch_info to X86PagingStructures, and all members and local variables of that type accordingly. * arch_thread_context_switch(): Added TODO: The still active paging structures can indeed be deleted before we stop using them. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37022 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
7198f765 |
|
05-May-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* During early kernel startup, we must not create areas without the CREATE_AREA_DONT_WAIT flag; waiting at this point is not allowed. * I hope I found all occurrences, but there might be some areas left (note, only those that don't use B_ALREADY_WIRED are problematic). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36624 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
26f1dd27 |
|
30-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added a third rendez-vous point for the call_all_cpus() MTRR functions. This fixes the problem that the CPU initiating the call could make the next call and reset sCpuRendezvous2 before the other CPUs have returned from their smp_cpu_rendezvous(). Probably virtually impossible on real hardware, but I could almost reliably reproduce it with qemu -smp 2 (would hang the late boot process without the ability to enter KDL). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36559 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
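The three-point rendezvous above can be sketched with counters and threads (a userland model of the idea, not Haiku's actual smp_cpu_rendezvous() API; all names here are illustrative): the third barrier guarantees that nobody is still spinning on an earlier counter when the initiator resets it for the next call.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Each participant increments the counter and spins until everyone has
// arrived. Three separate rendezvous points per call ensure the
// initiator cannot start the next call (and reset a counter) while
// stragglers are still spinning on it.
static void rendezvous(std::atomic<int>& counter, int cpuCount)
{
	counter.fetch_add(1, std::memory_order_acq_rel);
	while (counter.load(std::memory_order_acquire) < cpuCount) {
		// spin until all participants arrived
	}
}

int run_on_all_cpus(int cpuCount)
{
	std::atomic<int> point1(0), point2(0), point3(0);
	std::atomic<int> workDone(0);
	std::vector<std::thread> threads;
	for (int i = 0; i < cpuCount; i++) {
		threads.emplace_back([&]() {
			rendezvous(point1, cpuCount);	// everyone ready
			workDone.fetch_add(1);			// e.g. write MTRRs here
			rendezvous(point2, cpuCount);	// everyone finished the work
			rendezvous(point3, cpuCount);	// everyone past point2
		});
	}
	for (std::thread& thread : threads)
		thread.join();
	return workDone.load();
}
```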
#
dac21d8b |
|
18-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* map_physical_memory() now always sets a memory type. If none is given (it needs to be or'ed to the address specification), "uncached" is assumed. * Set the memory type for the "BIOS" and "DMA" areas to write-back. Not sure if that's correct, but that's what was effectively used on my machines before. * Changed x86_set_mtrrs() and the CPU module hook to also set the default memory type. * Rewrote the MTRR computation once more: - Now we know all used memory ranges, so we are free to extend used ranges into unused ones in order to simplify them for MTRR setup. - Leverage the subtractive properties of uncached and write-through ranges to simplify ranges of any other respectively write-back type. - Set the default memory type to write-back, so we don't need MTRRs for the RAM ranges. - If a new range intersects with an existing one, we no longer just fail. Instead we use the strictest requirements implied by the ranges. This fixes #5383. Overall the new algorithm should be sufficient with far fewer MTRRs than before (on my desktop machine 4 are used at maximum, while 8 didn't quite suffice before). A drawback of the current implementation is that it doesn't deal with the case of running out of MTRRs at all, which might result in some ranges having weaker caching/memory ordering properties than requested. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35515 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bcc2c157 |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Refactored vm_translation_map: * Pulled the physical page mapping functions out of vm_translation_map into a new interface VMPhysicalPageMapper. * Renamed vm_translation_map to VMTranslationMap and made it a proper C++ class. The functions in the operations vector have become methods. * Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper as far as possible (without actually writing new code). * Adjusted the x86 and the PPC specifics accordingly (untested for the latter). For the other architectures the build is, I'm afraid, seriously broken. The next steps will modify and extend the VMTranslationMap interface, so that it will be possible to fix the bugs in vm_unmap_page[s]() and employ architecture specific optimizations. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
34a48c70 |
|
07-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added type nanotime_t (an int64 storing a nanoseconds value) and function system_time_nsecs(), returning the system time in nanoseconds. The function is only really implemented for x86. For the other architectures system_time() * 1000 is returned. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34543 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
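The nanotime_t addition above is small enough to sketch directly (fake_system_time() stands in for the real kernel function; the fallback path is the one the commit describes for non-x86 architectures):

```cpp
#include <cstdint>

typedef int64_t nanotime_t;
	// an int64 storing a nanoseconds value

// Hypothetical stand-in for the kernel's microsecond clock.
static int64_t fake_system_time() { return 123456; }

// Generic fallback: architectures without a native nanosecond
// implementation derive it from the microsecond system_time().
nanotime_t system_time_nsecs_fallback()
{
	return fake_system_time() * 1000;
}
```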
#
e50cf876 |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the VM headers into subdirectory vm/. * Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
90d870c1 |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved VMAddressSpace definition to vm_address_space.h. * "Classified" VMAddressSpace, i.e. turned the vm_address_space_*() functions into methods, made all attributes (but "areas") private, and added accessors. * Also turned the vm.cpp functions vm_area_lookup() and remove_area_from_address_space() into VMAddressSpace methods. The rest of the area management functionality will follow soon. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34447 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bb163c02 |
|
23-Nov-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added a set_mtrrs() hook to x86_cpu_module_info, which is supposed to set all MTRRs at once. * Added a respective x86_set_mtrrs() kernel function. * x86 CPU module: - Implemented the new hook. - Prefixed most debug output with the CPU index. Otherwise it gets quite confusing with multiple CPUs. - generic_init_mtrrs(): No longer clear all MTRRs, if they are already enabled. This lets us benefit from the BIOS's setup until we install our own -- otherwise with caching disabled things are *really* slow. * arch_vm.cpp: Completely rewrote the MTRR handling as the old one was not only slow (O(2^n)), but also broken (resulting in incorrect setups (e.g. with cachable ranges larger than requested)), and not working by design for certain cases (subtractive setups intersecting ranges added later). Now we maintain an array with the successfully set ranges. When a new range is added, we recompute the complete MTRR setup as we need to. The new algorithm analyzing the ranges has linear complexity and also handles range base addresses with an alignment not matching the range size (e.g. a range at address 0x1000 with size 0x2000) and joining of adjacent/overlapping ranges of the same type. This fixes the slow graphics on my 4 GB machine (though unfortunately the 8 MTRRs aren't enough to fully cover the complete frame buffer (about 35 pixel lines remain uncachable), but that can't be helped without rounding up the frame buffer size, for which we don't have enough information). It might also fix #1823. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34197 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
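One constraint driving the rewrite above is that each MTRR can only describe a naturally aligned, power-of-two sized block. A sketch of handling a range whose base alignment doesn't match its size (the commit's example: a range at address 0x1000 with size 0x2000), under the assumption of a greedy decomposition; this is illustrative, not Haiku's exact algorithm:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Decompose [base, base + size) into naturally aligned power-of-two
// blocks, greedily taking the largest block permitted by both the
// current base alignment and the remaining size.
std::vector<std::pair<uint64_t, uint64_t> >
decompose_range(uint64_t base, uint64_t size)
{
	std::vector<std::pair<uint64_t, uint64_t> > blocks;
	while (size > 0) {
		// largest power of two dividing base (base == 0: unconstrained)
		uint64_t alignment = base == 0 ? UINT64_MAX : base & -base;
		// largest power of two <= remaining size
		uint64_t largest = UINT64_C(1) << (63 - __builtin_clzll(size));
		uint64_t blockSize = alignment < largest ? alignment : largest;
		blocks.push_back(std::make_pair(base, blockSize));
		base += blockSize;
		size -= blockSize;
	}
	return blocks;
}
```

For base 0x1000, size 0x2000 this yields two 0x1000-sized blocks, where a naive size-aligned approach would fail outright.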
#
e0d8627a |
|
17-Oct-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* When you press ctrl-alt-del during the boot process, interrupts are disabled when you enter arch_cpu_shutdown(), so you must not try to load the ACPI module to reboot. DaaT, that should fix the problem you showed me. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33617 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
40d6120c |
|
14-Sep-2009 |
Jérôme Duval <korli@users.berlios.de> |
Patch from Vincent Duvert (edited by myself): Implement reboot via ACPI (#4459) git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33135 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ee280b59 |
|
04-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Prevent the user TLB invalidation function from being preempted by turning off interrupts when invoking it. The user TLB invalidation function essentially only reads and writes back control register 3 (cr3) which holds the physical address of the current page directory. Still, a preemption between the read and the write can cause problems when the last thread of a team dies and therefore the team is deleted. The context switch on preemption would decrement the refcount of the object that holds the page directory. Then the team address space is deleted, causing the context switch returning to that thread to not re-acquire a reference to the object. At that point the page directory as set in cr3 is the one of the previously run thread (which is fine, as all share the kernel space mappings we need). Now when the preempted thread continues though, it would overwrite cr3 with the physical page directory address from before the context switch still stored in eax, therefore setting the page directory to the one of the dying thread that now doesn't have the corresponding reference. As the thread progresses further, it would release the last reference, causing the deletion of the object and freeing of the, now active again, page directory. The memory getting overwritten (by deadbeef) now completely corrupts the page directory, causing basically any memory access to fault, in the end resulting in a triple fault. This should fix bug #3399. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32118 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
671a2442 |
|
31-Jul-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
More work towards making our double fault handler less triple fault prone: * SMP: - Added smp_send_broadcast_ici_interrupts_disabled(), which is basically equivalent to smp_send_broadcast_ici(), but is only called with interrupts disabled and gets the CPU index, so it doesn't have to use smp_get_current_cpu() (which dereferences the current thread). - Added cpu index parameter to smp_intercpu_int_handler(). * x86: - arch_int.c -> arch_int.cpp - Set up an IDT per CPU. We were using a single IDT for all CPUs, but that can't work, since we need different tasks for the double fault interrupt vector. - Set the per CPU double fault task gates correctly. - Renamed set_intr_gate() to set_interrupt_gate() and set_system_gate() to set_trap_gate() and documented them a bit. - Renamed double_fault_exception() to x86_double_fault_exception() and fixed it not to use smp_get_current_cpu(). Instead we have the new x86_double_fault_get_cpu() that deduces the CPU index from the used stack. - Fixed the double_fault interrupt handler: It no longer calls int_bottom to avoid accessing the current thread. * debug.cpp: - Introduced explicit debug_double_fault() to enter the kernel debugger from a double fault handler. - Avoid using smp_get_current_cpu(). - Don't use kprintf() before sDebuggerOnCPU is set. Otherwise acquire_spinlock() is invoked by arch_debug_serial_puts(). Things look a bit better when the current thread pointer is broken -- we run into kernel_debugger_loop() and successfully print the "Welcome to KDL" message -- but we still dereference the thread pointer afterwards, so that we don't get a usable kernel debugger yet. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32050 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
cc77aba1 |
|
31-Jul-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Allocate a separate double fault stack for each CPU. * Added x86_double_fault_get_cpu(), a safe way to get the CPU index when in the double fault handler. smp_get_current_cpu() requires at least a somewhat intact thread structure, so we rather want to avoid it when handling a double fault. There are a lot more of those dependencies in the KDL entry code. Working on it... git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32028 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
75557e0a |
|
17-Jun-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* Minor cleanup. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@31078 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e0518baa |
|
20-Apr-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed incorrect loop condition. Thanks Francois for reviewing! (and sorry for mistreating your name -- Haiku's svn is to blame :-)). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30293 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8b3b05cb |
|
20-Apr-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Synchronize the TSCs of all CPUs early in the boot process, so system_time() will return consistent values. This helps with debug measurements for the time being. Obviously we'll have to think of something different when we support speed-stepping on models with frequency-dependent TSCs. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30287 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
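Since system_time() on x86 is derived from the TSC, per-CPU counters must agree for it to be consistent. A small sketch of reading the counter (read_tsc is an illustrative wrapper; the non-x86 branch is a stand-in for build hosts without rdtsc): on a single CPU with an invariant TSC, successive reads must be non-decreasing, and the synchronization in this commit extends that property across CPUs.

```cpp
#include <cstdint>
#if defined(__x86_64__) || defined(__i386__)
#include <x86intrin.h>
#endif

// Read the time stamp counter. On x86 this compiles to rdtsc; on other
// architectures we fall back to a fake monotonic counter so the sketch
// still builds.
uint64_t read_tsc()
{
#if defined(__x86_64__) || defined(__i386__)
	return __rdtsc();
#else
	static uint64_t fake = 0;
	return ++fake;
#endif
}
```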
#
e293302d |
|
26-Jan-2009 |
Jérôme Duval <korli@users.berlios.de> |
* Now initialize SSE on all CPUs, as I noticed cr4 wasn't set correctly. * This fixes the use of SSE instructions here on a dual core. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29051 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
9a42ad7a |
|
22-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
When switching to a kernel thread we no longer set the page directory. This is not necessary, since userland teams' page directories also contain the kernel mappings, and avoids unnecessary TLB flushes. To make that possible the vm_translation_map_arch_info objects are reference counted now. This optimization reduces the kernel time of the Haiku build on my machine with SMP disabled a few percent, but interestingly the total time decreases only marginally. Haven't tested with SMP yet, but for full impact CPU affinity would be needed. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28287 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4a4abaf2 |
|
12-Oct-2008 |
Axel Dörfler <axeld@pinc-software.de> |
mmlr: * Actually call prepare_sleep_state() instead of calling enter_sleep_state() twice... * Commented out disabling interrupts when calling enter_sleep_state(), as our ACPI modules would then crash (needs memory & uses sems with interrupts disabled). This way, it at least works on some hardware, including emulators (as before). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28004 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
b18c9b97 |
|
10-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Implemented x86 assembly version of memset(). * memset() is now available through the commpage. * CPU modules can provide a model-optimized memset(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27952 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8a85be46 |
|
24-Sep-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Register the commpage as an image and its entries as symbols. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27722 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4722f139 |
|
14-Sep-2008 |
Axel Dörfler <axeld@pinc-software.de> |
* If we're in the kernel debugger, we won't even try to use ACPI to power off, as we cannot do so with interrupts turned off (ACPI needs to allocate memory dynamically). * Turn off interrupts right before going to sleep (_GTS), this at least works in VMware, maybe it also works on real hardware. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27500 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
da4a0bff |
|
11-Sep-2008 |
Stefano Ceccherini <stefano.ceccherini@gmail.com> |
fix gcc4 build git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27411 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
cb387cfb |
|
10-Sep-2008 |
Axel Dörfler <axeld@pinc-software.de> |
* Added acpi_shutdown() method. If the ACPI bus manager is installed, this will be used now. Tested only with VMware so far. * apm_shutdown() is now called with interrupts turned on. * Renamed arch_cpu.c to arch_cpu.cpp. * Minor cleanup. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27404 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
396b74228eefcf4bc21333e05c1909b8692d1b86 |
|
10-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
kernel/x86_64: save fpu state at interrupts The kernel is allowed to use fpu anywhere, so we must make sure that user state is not clobbered by saving fpu state at interrupt entry. There is no need to do that in case of system calls since all fpu data registers are caller saved. We do not need, though, to save the whole fpu state at task switch (again, thanks to the calling convention). Only status and control registers are preserved. This patch actually adds xmm0-15 registers to the clobber list of the task switch code, but the only reason for that is to make sure that nothing bad happens inside the function that executes that task switch. Inspection of the generated code shows that no xmm registers are actually saved. Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
b41f281071b84235ea911f1e02123692798f706d |
|
06-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
boot/x86_64: enable sse early Enable SSE as a part of the "preparation of the environment to run any C or C++ code" in the entry points of stage2 bootloader. SSE2 is going to be used by memset() and memcpy(). Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
f2f91078bdfb4cc008c2f87af2bcc4aedec85cbc |
|
06-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
kernel/x86_64: remove memset and memcpy from commpage There is absolutely no reason for these functions to be in commpage, they don't do anything that involves the kernel in any way. Additionally, this patch rewrites memset and memcpy in C++; the current implementation is quite simple (though it may perform surprisingly well when dealing with large buffers on cpus with ermsb). Better versions are coming soon. Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
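"Quite simple" here can be taken literally; a sketch in that spirit (my_memset/my_memcpy are illustrative names, not the committed code) is just a byte loop, leaving it to the compiler, and on ERMSB-capable CPUs to fast "rep movsb"/"rep stosb" microcode, to make it fast:

```cpp
#include <cstddef>
#include <cstdint>

// Byte-at-a-time memset: relies on compiler vectorization / ERMSB
// rather than hand-written assembly.
void* my_memset(void* dest, int value, size_t length)
{
	uint8_t* out = static_cast<uint8_t*>(dest);
	for (size_t i = 0; i < length; i++)
		out[i] = static_cast<uint8_t>(value);
	return dest;
}

// Byte-at-a-time forward memcpy (no overlap handling, like memcpy).
void* my_memcpy(void* dest, const void* source, size_t length)
{
	uint8_t* out = static_cast<uint8_t*>(dest);
	const uint8_t* in = static_cast<const uint8_t*>(source);
	for (size_t i = 0; i < length; i++)
		out[i] = in[i];
	return dest;
}
```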
#
6156a508adb812153113f01aa1e547fff1e41bdb |
|
06-Sep-2014 |
Paweł Dziepak <pdziepak@quarnos.org> |
kernel/x86[_64]: remove get_optimized_functions from cpu modules The possibility to specify custom memcpy and memset implementations in cpu modules is currently unused and there is generally no point in such a feature. There are only 2 x86 vendors that really matter and there isn't a very big difference in performance of the generic optimized versions of these functions across different models. Even if we wanted different versions of memset and memcpy depending on the processor model or features, a much better solution would be to use STT_GNU_IFUNC and save one indirect call. Long story short, we don't really benefit in any way from get_optimized_functions and the feature it implements; it only adds unnecessary complexity to the code. Signed-off-by: Paweł Dziepak <pdziepak@quarnos.org>
|
#
e6cfae450e1541792b9c015a8c25182015b096dc |
|
02-Feb-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/x86: Make x2APIC CPU topology detection more future proof The main reason for this patch is to fix the gcc 4.8.2 warning about hierarchyLevels possibly being used uninitialized. Such a thing actually cannot happen since all x2APIC CPUs are aware of at least 3 topology levels. However, once more topology levels are introduced we will have to deal with CPUs that do not report information about all of them.
|
#
527da4ca8a4c008b58da456c01a49dcf16a98fbc |
|
27-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Separate bootloader and kernel GDT and IDT logic From now on bootloader sets up its own minimal valid GDT and IDT. Then the kernel replaces them with its own tables.
|
#
8cf8e537740789b1b103f0aa0736dbfcf55359c2 |
|
05-Jan-2014 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/x86: Inline atomic functions and memory barriers
|
#
046af755d0baf974fb05cb7c017d59413de6a333 |
|
29-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: Fix stack corruption in cache topology detection
|
#
611376fef7e00967fb65342802ba668a807348d5 |
|
16-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: Let each CPU have its own GDT
|
#
52b442a687680ddd6a55478baeaa42ec87077f49 |
|
05-Dec-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: smp_cpu_rendezvous(): Use counter instead of bitmap
|
#
7db89e8dc395db73368479fd9817b2b67899f3f6 |
|
25-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Rework cpuidle module * Create new interface for cpuidle modules (similar to the cpufreq interface) * Generic cpuidle module is no longer needed * Fix and update Intel C-State module
|
#
077c84eb27b25430428d356f3d13afabc0cc0d13 |
|
05-Nov-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: atomic_*() functions rework * No need for the atomically changed variables to be declared as volatile. * Drop support for atomically getting and setting unaligned data. * Introduce atomic_get_and_set[64]() which works the same as atomic_set[64]() used to. atomic_set[64]() does not return the previous value anymore.
|
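The reworked semantics above can be modeled on top of std::atomic (the function names mirror the commit; implementing them via std::atomic is an assumption of this sketch, not how the kernel does it): atomic_set() no longer returns the previous value, while the new atomic_get_and_set() does.

```cpp
#include <atomic>
#include <cstdint>

// atomic_set(): plain atomic store, no return value.
void atomic_set_sketch(std::atomic<int32_t>& value, int32_t newValue)
{
	value.store(newValue, std::memory_order_release);
}

// atomic_get_and_set(): atomic exchange, returns the previous value
// (the behavior atomic_set() used to have).
int32_t atomic_get_and_set_sketch(std::atomic<int32_t>& value,
	int32_t newValue)
{
	return value.exchange(newValue, std::memory_order_acq_rel);
}
```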
#
cf863a50401af89883ea314ccf54e16badd9439e |
|
16-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Decide whether to use simple or affine scheduler Simple scheduler is used when we do not have to worry about cache affinity (i.e. single core with or without SMT, multicore with all cache levels shared). When we replace gSchedulerLock with more fine grained locking affine scheduler should also be chosen when logical CPU count is high (regardless of cache).
|
#
29e65827fd93f67acbebcdbbe1f233b004a48e18 |
|
09-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel: Remove possibility to yield to all threads Kernel support for yielding to all (including lower priority) threads has been removed. POSIX sched_yield() remains unchanged. If a thread really needs to yield to everyone it can reduce its priority to the lowest possible and then yield (it will then need to manually return to its previous priority upon continuing).
|
#
7039b950fb70c6138b7e75f1ec7f8df7eaf0740c |
|
05-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Fix style issues
|
#
7087b865e20218d675add23731e96286a4cf2e0c |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Remove superfluous memset()s
|
#
36cc64a9b3f1744e7a030248bb81526e9f37f3d6 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add CPU cache topology detection for AMD and Intel CPUs
|
#
1f50d09018c8cd7e573fcbdd8741712c5806ab7d |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
kernel/util: Add bit hack utilities
|
#
26c3861891cefb2246827c895d81dbb1a701a648 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Fix some style issues
|
#
fa6f78aee77ab0cdb60af20dbf479218afeb8d33 |
|
02-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Use uint32 for maximum CPUID leaf number
|
#
c9b6f27d949a38ad0a02d55d29cc4dbe7b229069 |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add CPU topology detection for AMD processors
|
#
f1644d9d0b31e411a43c73afc4fd46bf73718f8c |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Set level shift by counting bits in mask
|
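"Counting bits in mask" can be illustrated as follows (a sketch under the assumption of contiguous sub-topology masks, as CPUID reports them; level_shift is an illustrative name): the shift needed to strip one topology level from the APIC ID is simply the population count of that level's mask.

```cpp
#include <cstdint>

// Number of set bits in the mask == number of APIC ID bits occupied by
// this topology level, i.e. the shift to reach the next level.
int level_shift(uint32_t mask)
{
	int shift = 0;
	while (mask != 0) {
		shift += mask & 1;
		mask >>= 1;
	}
	return shift;
}
```

For example, a 2-way SMT mask of 0x1 gives shift 1, so the core ID starts at bit 1 of the APIC ID.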
#
fafeda52eacdec6babbb789ad214a91d6387d38e |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Do not return too soon from detectCPUTopology()
|
#
8ec897323ec1c6b6d2ed7e540d6118fbd3dd9855 |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add CPU topology detection for Intel processors
|
#
4110b730dbee59f5515a0bf9997b6cd167965080 |
|
01-Oct-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Add support for CPUID sub-leaves Some CPUID leaves may contain one or more sub-leaves accessed by setting ECX to an appropriate value.
|
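Mechanically, a sub-leaf query just places the leaf in EAX and the sub-leaf index in ECX before executing cpuid. A sketch using GCC's <cpuid.h> helper (the wrapper name is illustrative; the non-x86 branch is a stand-in for other build hosts):

```cpp
#include <cstdint>
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
#endif

// Query CPUID leaf/sub-leaf into regs[4] = {eax, ebx, ecx, edx}.
// __get_cpuid_count sets ECX to the sub-leaf index before cpuid and
// returns 0 if the leaf is not supported.
bool query_cpuid_subleaf(uint32_t leaf, uint32_t subleaf, uint32_t regs[4])
{
#if defined(__x86_64__) || defined(__i386__)
	return __get_cpuid_count(leaf, subleaf, &regs[0], &regs[1], &regs[2],
		&regs[3]) != 0;
#else
	(void)leaf; (void)subleaf; (void)regs;
	return false;	// no cpuid on this architecture
#endif
}
```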
#
278f66b6b1dd47b3834c768308fa3d21a5eadb88 |
|
13-Sep-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Enable NX on non-boot CPUs as soon as possible
|
#
b8dc812f3e99db27af1d4e6495a305bfb830a507 |
|
13-Sep-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86[_64]: Enable NX on non-boot CPUs as soon as possible
|
#
0edcbd2754bbde71864f6ec2578a05084d4dc23d |
|
26-Aug-2013 |
Jérôme Duval <jerome.duval@gmail.com> |
apic: serialize writes to x2apic MSR... as required by the specifications (it isn't needed with memory mapped i/o).
|
#
0fef11f1a8b62e67aadf921fc6ddd31cad5b36bb |
|
22-Apr-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
arch: some CPUID leaves may not be available
|
#
103977d0a94f8218b2df110ee2f8a8157edf692f |
|
17-Apr-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
arch: NX is initialized too early on non-boot CPUs
|
#
e85e399fd7b229b8bc92f28928a059876d7216d3 |
|
17-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
commpage: randomize position of commpage This patch introduces randomization of the commpage position. From now on the commpage table contains offsets from the beginning of the commpage to the particular commpage entry. Similarly, addresses of symbols in the ELF memory image "commpage" are just offsets from the beginning of the commpage. This patch also updates KDL so that commpage entries are recognized and shown correctly in stack traces. An update of Debugger is yet to be done.
|
#
966f207668d19610dae34d5331150e3742815bcf |
|
06-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: enable data execution prevention Set the execute disable bit for any page that belongs to an area with neither B_EXECUTE_AREA nor B_KERNEL_EXECUTE_AREA set. In order to take advantage of the NX bit in 32 bit protected mode, PAE must be enabled. Thus, from now on it is also enabled when the CPU supports the NX bit. vm_page_fault() takes an additional argument which indicates whether the page fault was caused by an illegal instruction fetch.
|
#
211f71325a1c2c1f3c7d0efabe01506144fcd6ba |
|
06-Mar-2013 |
Pawel Dziepak <pdziepak@quarnos.org> |
x86: move x86_userspace_thread_exit() from user stack to commpage x86_userspace_thread_exit() is a stub originally placed at the bottom of each thread user stack that ensures any thread invokes exit_thread() upon returning from its main higher level function. Putting anything that is expected to be executed on a stack causes problems when implementing data execution prevention. Code of x86_userspace_thread_exit() is now moved to commpage which seems to be much more appropriate place for it.
|
#
d1f280c80529d5f0bc55030c2934f9255bc7f6a2 |
|
01-Apr-2012 |
Hamish Morrison <hamishm53@gmail.com> |
Add support for pthread_attr_get/setguardsize() * Added the aforementioned functions. * create_area_etc() now takes a guard size parameter. * The thread_info::stack_base/end range now refers to the usable range only.
|
#
12e574a316012b01690f84ab8a0fc2f6aedff9ae |
|
04-Nov-2012 |
Jérôme Duval <jerome.duval@gmail.com> |
enlarge the buffer for the CPU features string * 256 bytes wasn't enough for i5-2557m
|
#
9b0d045c59c1d03dbedf8c76ac88efe6bda7d8d0 |
|
03-Nov-2012 |
Fredrik Holmqvist <fredrik.holmqvist@gmail.com> |
Update to ACPICA 20121018. This is an update from 20120711 and A LOT has happened since then. See https://acpica.org/download/changes.txt for all the changes.
|
#
19187c464b4598774f731cd015c4fbc893c25348 |
|
03-Apr-2012 |
Yongcong Du <ycdu.vmcore@gmail.com> |
x86: Initialize IA32_MSR_ENERGY_PERF_BIAS The lowest 4 bits of the MSR serve as a hint to the hardware to favor performance or energy saving. 0 means a hint preference for highest performance while 15 corresponds to the maximum energy savings. A value of 7 translates into a hint to balance performance with energy savings. The default reset value of the MSR is 0. If the BIOS doesn't initialize the MSR, the hardware will run in the performance state. This patch initializes the MSR with a value of 7 for balance between performance and energy savings. Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
|
#
d2a1be1c4e4a8ae3879d7f59b07a6924c62b4b14 |
|
18-Aug-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Cleaner separation of 32-/64-bit specific CPU/interrupt code. Renamed {32,64}/int.cpp to {32,64}/descriptors.cpp, which now contain functions for GDT and TSS setup that were previously in arch_cpu.cpp, as well as the IDT setup code. These get called from the init functions in arch_cpu.cpp, rather than having a bunch of ifdef'd chunks of code for 32/64.
|
#
59ae45c1ab32476f1fa428dae22989f8387a1f9e |
|
21-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Fixed commpage for x86_64. Since the commpage is at a kernel address, changed 64-bit paging code to match x86's behaviour of allowing user-accessible mappings to be created in the kernel portion of the address space. This is also required by some drivers.
|
#
5234e66d32184c0843e7c5020c23e28f88e50569 |
|
21-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Optimized memcpy/memset for x86_64.
|
#
a51a5f3e1e8dbcf4a92a041fa99e73475d2d5524 |
|
12-Jul-2012 |
Fredrik Holmqvist <fredrik.holmqvist@gmail.com> |
Fixes to Haiku specific code to work with ACPICA 20120711.
|
#
76a1175dbe1a314563ca18c0b7fb82695a9730cd |
|
11-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Support for SMP on x86_64. No major changes to the kernel: just compiled in arch_smp.cpp and fixed the IDT load in arch_cpu_init_percpu to use the correct limit for x86_64 (uses sizeof(interrupt_descriptor)). In the boot loader, changed smp_boot_other_cpus to construct a temporary GDT and get the page directory address from CR3, as what's in kernel_args will be 64-bit stuff and will not work to switch the CPUs into 32-bit mode in the trampoline code. Refactored 64-bit kernel entry code to not use the stack after disabling paging, as the secondary CPUs are given a 32-bit virtual stack address by the SMP trampoline code which will no longer work.
|
#
609b308e64d28acbcac1edde134b427f648914a4 |
|
10-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Return B_NOT_SUPPORTED for shutdown if ACPI is unavailable (no APM on x86_64).
|
#
5670b0a8e4fe8e5504b2e57a958e1590f6024406 |
|
09-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Moved the 32-bit page fault handler to arch_int.cpp, use it for x86_64. A proper page fault handler was required for areas that were not locked into the kernel address space. This enables the boot process to get up to the point of trying to find the boot volume.
|
#
b5c9d24abcc3599375153ed310b495ea944d46a0 |
|
09-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Implemented threading for x86_64. * Thread creation and switching is working fine, however threads do not yet get interrupted because I've not implemented hardware interrupt handling yet (I'll do that next). * I've made some changes to struct iframe: I've removed the e/r prefixes from the member names for both 32/64, so now they're just named ip, ax, bp, etc. This makes it easier to write code that works with both 32/64 without having to deal with different iframe member names.
|
#
8c5e74719039c7275a5a80b236b830b7b4ba1be7 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Don't need to shift the factor in system_time(), just store the already shifted value.
|
#
5e9bb17da7b9cdd76ff9072486fab90688cf8c36 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Renamed remaining i386_* functions to x86_* for consistency.
|
#
cc248cf2b37b96f8f86d652e0121be105b23d192 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
A couple of bug fixes. * mmu_get_virtual_mapping() should check that the page directory entry is present rather than assuming there's a page table there. This was resulting in some invalid mappings being created in the 64-bit virtual address space. * arch_vm_init_end() should clear from KERNEL_LOAD_BASE to virtual_end, not from KERNEL_BASE. On x86_64 this was causing it to loop through ~512GB of address space, which obviously was taking quite a while.
|
#
5c7d52183c2182761151ba2f8f72bb7b39e50053 |
|
08-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Implemented system_time() for x86_64. * Uses 64-bit multiplication; special handling for CPUs clocked < 1 GHz in system_time_nsecs() is not required like on x86. * Tested against a straight conversion of the x86 version, noticeably faster with a large number of system_time() calls.
|
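The usual fixed-point scheme behind such a conversion can be sketched as follows (the 32-bit shift and the function names are assumptions for illustration, not necessarily the committed code): precompute a scale factor once from the TSC frequency, then each system_time() call is one widening multiply and a shift.

```cpp
#include <cstdint>

// factor = (usecs-per-second << 32) / tscFrequency, computed once at
// calibration time.
uint64_t compute_factor(uint64_t tscFrequency)
{
	return (UINT64_C(1000000) << 32) / tscFrequency;
}

// system_time() = (tsc * factor) >> 32; the 128-bit intermediate keeps
// the full product of the 64-bit multiply.
uint64_t tsc_to_usecs(uint64_t tsc, uint64_t factor)
{
	return (uint64_t)(((unsigned __int128)tsc * factor) >> 32);
}
```

With a 1 GHz TSC, one second's worth of ticks converts to ~1000000 µs (off by at most a tick or two of rounding from the truncated factor).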
#
e276cc0457a4ddb3f137504e220ee5e839f132d4 |
|
05-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Finished implementation of x86_64 paging. * vm_init now runs up until create_preloaded_image_areas(), which needs fixing to handle ELF64. * Not completely tested. I know Map(), Unmap() and Query() work fine, the other methods have not been tested as the kernel doesn't boot far enough for any of them to be called yet. As far as I know they're correct, though. * Not yet implemented the destructor for X86VMTranslationMap64Bit or Init() for a user address space.
|
#
4304bb9894335fe5e5bd667a1f27dc7605c2e5b9 |
|
04-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Added arch_cpu.cpp to the x86_64 build. * Some things are currently ifndef'd out completely for x86_64 because they aren't implemented; there are a few other ifdefs to handle x86_64 differences, but most of the code works unchanged. * Renamed some i386_* functions to x86_*. * Added a temporary method for setting the current thread on x86_64 (a global variable, not SMP safe). This will be changed to be done via the GS segment, but I've not implemented that yet.
|
#
4e8fbfb2d158de7b1cadd1c060acee51a7d67309 |
|
03-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
x86_{read,write}_cr{0,4} can just be implemented as macros; put an x86_ prefix on the other read/write macros for consistency.
|
#
cbfe5fcd171cee34562e5f86ef9586c027a1dd30 |
|
03-Jul-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Removed redundant x86 sources/headers.
|
#
fb8447d59586d40fc8987ede4795ebda64354839 |
|
02-Jul-2012 |
Rene Gollent <anevilyak@gmail.com> |
Fix ticket #8650. - Replace arch_cpu_user_strlcpy() and arch_cpu_user_memset() with x86 assembly versions. These correctly handle the fault handler, which had broken again on gcc4 for the C versions, causing stack corruption in certain error cases. The other architectures will still need corresponding asm variants added so that they don't run into the same issue, though.
|
#
e5fc2bfcab8c15a3ff7d33c358f9aa82ed73c823 |
|
26-Jun-2012 |
Alex Smith <alex@alex-smith.me.uk> |
Implemented long mode setup/switch code; the bootloader can now start the 64-bit kernel! The setup procedure is fairly simple: create a 64-bit GDT and 64-bit page tables that include all kernel mappings from the 32-bit address space, but at the correct 64-bit addresses, then go through kernel_args and change all virtual addresses to 64-bit addresses, and finally switch to long mode and jump to the kernel.
|
#
45cf3294b25dfdc3444bb54983e03f0a43c0f51c |
|
03-Apr-2012 |
Yongcong Du <ycdu.vmcore@gmail.com> |
x86: add cpuid feature 6 flags Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
|
#
cc586f1655b94c248be58ba1752b42bc39fbaf03 |
|
07-Apr-2012 |
Yongcong Du <ycdu.vmcore@gmail.com> |
x86: AMD C1E with no ARAT (Always Running APIC Timer) idle support. AMD C1E is a BIOS-controlled C3 state. Certain processor families may cut off the TSC and the local APIC timer in a deep C state, including C1E, so the CPU can't be woken up and the system will hang. This patch first adds support for idle-routine selection during boot. It then implements an amdc1e_noarat_idle() routine which checks the MSR containing C1eOnCmpHalt (bit 28) and SmiOnCmpHalt (bit 27) before executing the halt instruction, and clears them if set. Intel C1E doesn't have this problem. The difference between C1E and C3 is that the transition into C1E is not initiated by the operating system: the system enters C1E automatically when both cores have entered C1. (Intel's C1E, in contrast, means "reduce CPU voltage before entering the corresponding Cx-state".) This patch may fix #8111, #3999, #7562, #7940 and #8060. Copied from the description of #3999: >but for some reason I hit the power button instead of the reset one. And >the boot continued!! The reason is that the CPUs are woken up once the power button is hit. Signed-off-by: Fredrik Holmqvist <fredrik.holmqvist@gmail.com>
|
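The bit test the entry above describes can be modeled without any kernel machinery. This is an illustrative sketch, not Haiku's code; the MSR number (commonly 0xC0010055, the interrupt-pending message register, on the affected AMD families) is stated here as an assumption.

```cpp
#include <cstdint>

// Bits of the (assumed) MSR 0xC0010055 that must be cleared before
// halting, so the CPU can be woken from C1E again.
const uint64_t kC1eOnCmpHalt = 1ull << 28;
const uint64_t kSmiOnCmpHalt = 1ull << 27;

// Returns the MSR value with both bits cleared; 'changed' reports whether
// a write-back (wrmsr) would be needed. Illustrative helper name.
uint64_t clear_c1e_bits(uint64_t msr, bool& changed)
{
	changed = (msr & (kC1eOnCmpHalt | kSmiOnCmpHalt)) != 0;
	return msr & ~(kC1eOnCmpHalt | kSmiOnCmpHalt);
}
```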
#
c1cd48b72f6be4828edf9a25ed306f354e99dcd0 |
|
21-Feb-2012 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel: Fix FPU on non-APIC systems * If no APIC is present, the SMP code never gets called to set up the FPU. * Detect the lack of an APIC and set up the FPU in arch_cpu. * Should fix #8346 and #8348
|
#
3f1eed704a8c799a40cc005bf4cb904463148d79 |
|
14-Feb-2012 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel: x86 SSE improvements * Prepend x86_ to non-static x86 code * Add x86_init_fpu function to kernel header * Don't init fpu multiple times on smp systems * Verified fpu is still started on smp and non-smp * SSE code still generates general protection faults on smp systems though
|
#
8dd1e875c1d3735627166c6639078ae4419e7918 |
|
20-Jan-2012 |
Alexander von Gluck IV <kallisti5@unixzen.com> |
kernel: Fix FPU SSE + MMX instruction usage. * Rename init_sse to init_fpu and handle FPU setup there. * Stop trying to set up the FPU before VM init. We tried to set up the FPU before VM init, then set it up again after VM init with SSE extensions; this caused SSE and MMX applications to crash. * Be more logical in FPU setup by detecting the CPU flag prior to enabling the FPU (it's unlikely Haiku will run on a processor without an FPU... but let's be consistent). * SSE2 gcc code now runs (faster, even) without a GPF. * tqh confirms his previously crashing MMX code now works. * The non-SSE FPU enable after VM init still needs testing!
|
#
11977ba83b29ff77790cd7132980cc3d43d9b00e |
|
20-Jun-2011 |
Jérôme Duval <korli@users.berlios.de> |
added more cpu feature flags for x86 git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@42263 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e85a4793d2453150653419011185498204de97c8 |
|
02-Jan-2011 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
strcpy() -> strlcpy() (CID 7986). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@40081 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
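For context on why the strcpy() to strlcpy() swap above resolves the Coverity finding: strlcpy() bounds the write and still reports the full source length. The sketch below is an illustrative reimplementation (my_strlcpy is not Haiku's code).

```cpp
#include <cstddef>

// BSD-style strlcpy() semantics: never writes more than 'size' bytes,
// always NUL-terminates (when size > 0), and returns the length of the
// source string so callers can detect truncation.
size_t my_strlcpy(char* dest, const char* source, size_t size)
{
	size_t sourceLength = 0;
	while (source[sourceLength] != '\0')
		sourceLength++;

	if (size > 0) {
		size_t toCopy = sourceLength < size - 1 ? sourceLength : size - 1;
		for (size_t i = 0; i < toCopy; i++)
			dest[i] = source[i];
		dest[toCopy] = '\0';
	}
	return sourceLength;
}
```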
#
7a1123a7bfbabda080e53613497c159a75e988ba |
|
15-Aug-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* Moved the "run me on the boot CPU" code to where it is actually used. * Added a TODO that thread_yield() doesn't like to be called from the idle thread. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@38109 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1316462ab0fd0f25f7dbb7f23b607d981efd3edc |
|
20-Jul-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* Added some test code to make sure that we run on the boot CPU on shutdown; I haven't tested it on the problematic machine yet, though. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37635 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
a8ad734f1c698917badb15e1641e0f38b3e9a013 |
|
14-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Introduced structures {virtual,physical}_address_restrictions, which specify restrictions for virtual/physical addresses. * vm_page_allocate_page_run(): - Fixed conversion of base/limit to array indexes. sPhysicalPageOffset was not taken into account. - Takes a physical_address_restrictions instead of base/limit and also supports alignment and boundary restrictions, now. * map_backing_store(), VM[User,Kernel]AddressSpace::InsertArea()/ ReserveAddressRange() take a virtual_address_restrictions parameter, now. They also support an alignment independent from the range size. * create_area_etc(), vm_create_anonymous_area(): Take {virtual,physical}_address_restrictions parameters, now. * Removed no longer needed B_PHYSICAL_BASE_ADDRESS. * DMAResources: - Fixed potential overflows of uint32 when initializing from device node attributes. - Fixed bounce buffer creation TODOs: By using create_area_etc() with the new restrictions parameters we can directly support physical high address, boundary, and alignment. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37131 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
1b3e83addefd97925b84cebaf4003d14c9062781 |
|
08-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Moved paging related files to new subdirectories paging and paging/32bit. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37060 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
5aa0503c7c1ce7ea4c0595d9a402e612bb290ec8 |
|
07-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Removed i386_translation_map_get_pgdir() and adjusted the one place where it was used. * Renamed X86VMTranslationMap to X86VMTranslationMap32Bit and pulled the paging method agnostic part into new base class X86VMTranslationMap. * Moved X86PagingStructures into its own header/source pair. * Moved pgdir_virt from X86PagingStructures to X86PagingStructures32Bit where it is actually used. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37055 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8421714089091fc545726be0654e13d29de1f1ae |
|
05-Jun-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
x86: * Renamed vm_translation_map_arch_info to X86PagingStructures, and all members and local variables of that type accordingly. * arch_thread_context_switch(): Added TODO: The still active paging structures can indeed be deleted before we stop using them. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@37022 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
7198f76564e4b062401bb95f6ddf540bbf9e8625 |
|
05-May-2010 |
Axel Dörfler <axeld@pinc-software.de> |
* During early kernel startup, we must not create areas without the CREATE_AREA_DONT_WAIT flag; waiting at this point is not allowed. * I hope I found all occurrences, but there might be some areas left (note, only those that don't use B_ALREADY_WIRED are problematic). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36624 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
26f1dd2708c81b94937d4ef865c0c6137a0cddfb |
|
30-Apr-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added a third rendez-vous point for the call_all_cpus() MTRR functions. This fixes the problem that the CPU initiating the call could make the next call and reset sCpuRendezvous2 before the other CPUs have returned from their smp_cpu_rendezvous(). Probably virtually impossible on real hardware, but I could almost reliably reproduce it with qemu -smp 2 (would hang the late boot process without ability to enter KDL). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@36559 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
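The three-counter rotation the entry above introduces can be modeled in user space, with threads standing in for CPUs. All names here are illustrative, not Haiku's code; the point is that each counter is only reset after the following rendezvous has completed, so no CPU can still be spinning on it when it is cleared.

```cpp
#include <atomic>
#include <thread>
#include <vector>

static std::atomic<int> sRendezvous, sRendezvous2, sRendezvous3;

// Spin until all CPUs have arrived at this point.
static void wait_for_all(std::atomic<int>& counter, int cpuCount)
{
	counter.fetch_add(1);
	while (counter.load() < cpuCount)
		std::this_thread::yield();
}

// What each "CPU" runs per call_all_cpus()-style invocation.
static void set_mtrrs_on_cpu(int cpu, int cpuCount, std::atomic<int>& work)
{
	wait_for_all(sRendezvous, cpuCount);
	if (cpu == 0)
		sRendezvous3.store(0);		// everyone left it in the previous call

	work.fetch_add(1);			// stands in for the actual MTRR update

	wait_for_all(sRendezvous2, cpuCount);
	if (cpu == 0)
		sRendezvous.store(0);		// everyone has passed sRendezvous2
	wait_for_all(sRendezvous3, cpuCount);
	if (cpu == 0)
		sRendezvous2.store(0);		// everyone has passed sRendezvous3
}
```

With only two counters, the initiator could start the next invocation and reset the second counter while a slow CPU was still spinning on it; the third rendezvous defers each reset until that is provably safe.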
#
dac21d8bfe3fcb0ee34a4a0c866c2474bfb8b155 |
|
18-Feb-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* map_physical_memory() now always sets a memory type. If none is given (it needs to be or'ed into the address specification), "uncached" is assumed. * Set the memory type for the "BIOS" and "DMA" areas to write-back. Not sure if that's correct, but that's what was effectively used on my machines before. * Changed x86_set_mtrrs() and the CPU module hook to also set the default memory type. * Rewrote the MTRR computation once more: - Now we know all used memory ranges, so we are free to extend used ranges into unused ones in order to simplify them for MTRR setup. - Leverage the subtractive properties of uncached and write-through ranges to simplify ranges of any other type, respectively of write-back type. - Set the default memory type to write-back, so we don't need MTRRs for the RAM ranges. - If a new range intersects with an existing one, we no longer just fail. Instead we use the strictest requirements implied by the ranges. This fixes #5383. Overall the new algorithm should suffice with far fewer MTRRs than before (on my desktop machine at most 4 are used, while 8 didn't quite suffice before). A drawback of the current implementation is that it doesn't deal with running out of MTRRs at all, which might result in some ranges having weaker caching/memory ordering properties than requested.
|
#
bcc2c157a1c54f5169de1e7a3e32c49e92bbe0aa |
|
13-Jan-2010 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Refactored vm_translation_map: * Pulled the physical page mapping functions out of vm_translation_map into a new interface VMPhysicalPageMapper. * Renamed vm_translation_map to VMTranslationMap and made it a proper C++ class. The functions in the operations vector have become methods. * Added class GenericVMPhysicalPageMapper implementing VMPhysicalPageMapper as far as possible (without actually writing new code). * Adjusted the x86 and the PPC specifics accordingly (untested for the latter). For the other architectures the build is, I'm afraid, seriously broken. The next steps will modify and extend the VMTranslationMap interface, so that it will be possible to fix the bugs in vm_unmap_page[s]() and employ architecture specific optimizations. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@35066 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
34a48c70efb9ce4fd54db2ef76d95e9c2ff9ec2e |
|
07-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Added type nanotime_t (an int64 storing a nanoseconds value) and function system_time_nsecs(), returning the system time in nanoseconds. The function is only really implemented for x86. For the other architectures system_time() * 1000 is returned. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34543 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e50cf8765be50a7454c9488db38b638cf90805af |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved the VM headers into subdirectory vm/. * Renamed vm_cache.h/vm_address_space.h to VMCache.h/VMAddressSpace. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34449 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
90d870c1556bdc415c7f41de5474ebebb0ceebdd |
|
02-Dec-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Moved VMAddressSpace definition to vm_address_space.h. * "Classified" VMAddressSpace, i.e. turned the vm_address_space_*() functions into methods, made all attributes (but "areas") private, and added accessors. * Also turned the vm.cpp functions vm_area_lookup() and remove_area_from_address_space() into VMAddressSpace methods. The rest of the area management functionality will follow soon. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34447 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
bb163c0289ef6ea5d1d6162f0178273c8933a7c0 |
|
23-Nov-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Added a set_mtrrs() hook to x86_cpu_module_info, which is supposed to set all MTRRs at once. * Added a respective x86_set_mtrrs() kernel function. * x86 CPU module: - Implemented the new hook. - Prefixed most debug output with the CPU index. Otherwise it gets quite confusing with multiple CPUs. - generic_init_mtrrs(): No longer clear all MTRRs, if they are already enabled. This lets us benefit from the BIOS's setup until we install our own -- otherwise with caching disabled things are *really* slow. * arch_vm.cpp: Completely rewrote the MTRR handling as the old one was not only slow (O(2^n)), but also broken (resulting in incorrect setups (e.g. with cachable ranges larger than requested)), and not working by design for certain cases (subtractive setups intersecting ranges added later). Now we maintain an array with the successfully set ranges. When a new range is added, we recompute the complete MTRR setup as we need to. The new algorithm analyzing the ranges has linear complexity and also handles range base addresses with an alignment not matching the range size (e.g. a range at address 0x1000 with size 0x2000) and joining of adjacent/overlapping ranges of the same type. This fixes the slow graphics on my 4 GB machine (though unfortunately the 8 MTRRs aren't enough to fully cover the complete frame buffer (about 35 pixel lines remain uncachable), but that can't be helped without rounding up the frame buffer size, for which we don't have enough information). It might also fix #1823. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@34197 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
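One building block of the rewritten algorithm above, joining adjacent or overlapping ranges of the same memory type in a single linear pass so fewer MTRRs are needed, can be sketched as follows (illustrative code and names, not Haiku's):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct MemoryRange {
	uint64_t base;
	uint64_t size;
	int type;	// e.g. write-back, write-combining, uncached
};

// Sort by base address, then merge each range into its predecessor when
// they touch or overlap and share the same type. Linear after the sort.
std::vector<MemoryRange> join_ranges(std::vector<MemoryRange> ranges)
{
	std::sort(ranges.begin(), ranges.end(),
		[](const MemoryRange& a, const MemoryRange& b) {
			return a.base < b.base;
		});

	std::vector<MemoryRange> joined;
	for (const MemoryRange& range : ranges) {
		if (!joined.empty() && joined.back().type == range.type
			&& range.base <= joined.back().base + joined.back().size) {
			// Adjacent or overlapping range of the same type: extend.
			uint64_t end = std::max(joined.back().base + joined.back().size,
				range.base + range.size);
			joined.back().size = end - joined.back().base;
		} else
			joined.push_back(range);
	}
	return joined;
}
```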
#
e0d8627a7349102c8ec81dd2146063f4290a8f55 |
|
17-Oct-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* When you press ctrl-alt-del during the boot process, interrupts are disabled when you enter arch_cpu_shutdown(), so you must not try to load the ACPI module to reboot. DaaT, that should fix the problem you showed me. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33617 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
40d6120c3b777e678e79cbf7ac7b81fdcc31324d |
|
14-Sep-2009 |
Jérôme Duval <korli@users.berlios.de> |
Patch from Vincent Duvert (edited by myself): Implement reboot via ACPI (#4459) git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@33135 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
ee280b59e95cdd6ebec4519aa9b616e58de79f76 |
|
04-Aug-2009 |
Michael Lotz <mmlr@mlotz.ch> |
Prevent the user TLB invalidation function from being preempted by turning off interrupts when invoking it. The user TLB invalidation function essentially only reads and writes back control register 3 (cr3) which holds the physical address of the current page directory. Still a preemption between the read and the write can cause problems when the last thread of a team dies and therefore the team is deleted. The context switch on preemption would decrement the refcount of the object that holds the page directory. Then the team address space is deleted causing the context switch returning to that thread to not re-acquire a reference to the object. At that point the page directory as set in cr3 is the one of the previously run thread (which is fine, as all share the kernel space mappings we need). Now when the preempted thread continues though, it would overwrite cr3 with the physical page directory address from before the context switch still stored in eax, therefore setting the page directory to the one of the dying thread that now doesn't have the corresponding reference. Further progressing the thread would release the last reference causing the deletion of the object and freeing of the, now active again, page directory. The memory getting overwritten (by deadbeef) now completely corrupts the page directory causing basically any memory access to fault, in the end resulting in a triplefault. This should fix bug #3399. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32118 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
671a2442d93f46c5343ef34e01306befa760c16a |
|
31-Jul-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
More work towards making our double fault handler less triple fault prone: * SMP: - Added smp_send_broadcast_ici_interrupts_disabled(), which is basically equivalent to smp_send_broadcast_ici(), but is only called with interrupts disabled and gets the CPU index, so it doesn't have to use smp_get_current_cpu() (which dereferences the current thread). - Added cpu index parameter to smp_intercpu_int_handler(). * x86: - arch_int.c -> arch_int.cpp - Set up an IDT per CPU. We were using a single IDT for all CPUs, but that can't work, since we need different tasks for the double fault interrupt vector. - Set the per CPU double fault task gates correctly. - Renamed set_intr_gate() to set_interrupt_gate and set_system_gate() to set_trap_gate() and documented them a bit. - Renamed double_fault_exception() x86_double_fault_exception() and fixed it not to use smp_get_current_cpu(). Instead we have the new x86_double_fault_get_cpu() that deducts the CPU index from the used stack. - Fixed the double_fault interrupt handler: It no longer calls int_bottom to avoid accessing the current thread. * debug.cpp: - Introduced explicit debug_double_fault() to enter the kernel debugger from a double fault handler. - Avoid using smp_get_current_cpu(). - Don't use kprintf() before sDebuggerOnCPU is set. Otherwise acquire_spinlock() is invoked by arch_debug_serial_puts(). Things look a bit better when the current thread pointer is broken -- we run into kernel_debugger_loop() and successfully print the "Welcome to KDL" message -- but we still dereference the thread pointer afterwards, so that we don't get a usable kernel debugger yet. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32050 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
cc77aba1013a9d850fdf17c5a4338c3ad65c98b7 |
|
31-Jul-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Allocate a separate double fault stack for each CPU. * Added x86_double_fault_get_cpu(), a safe way to get the CPU index when in the double fault handler. smp_get_current_cpu() requires an at least somewhat intact thread structure, so we rather want to avoid it when handling a double fault. There are a lot more of those dependencies in the KDL entry code. Working on it... git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@32028 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
75557e0ac305260450197fc7442b9780a0141bad |
|
17-Jun-2009 |
Axel Dörfler <axeld@pinc-software.de> |
* Minor cleanup. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@31078 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e0518baa0c961fdee315d1a00d2c692a4d0b8e1c |
|
20-Apr-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Fixed incorrect loop condition. Thanks Francois for reviewing! (and sorry for mistreating your name -- Haiku's svn is to blame :-)). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30293 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8b3b05cbf9cfc334c4abc5dc22d4706d36b93115 |
|
20-Apr-2009 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Synchronize the TSCs of all CPUs early in the boot process, so system_time() will return consistent values. This helps with debug measurements for the time being. Obviously we'll have to think of something different when we support speed-stepping on models with frequency-dependent TSCs. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@30287 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
e293302d3d1387c5cfe2ca1aed98aac2a7815b6e |
|
26-Jan-2009 |
Jérôme Duval <korli@users.berlios.de> |
* Now init SSE on all CPUs, as I noted cr4 wasn't set correctly. * This fixes the use of SSE instructions here on a dual core. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@29051 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
9a42ad7a77f11cf1b857e84ec70d21b1afaa71cd |
|
22-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
When switching to a kernel thread we no longer set the page directory. This is not necessary, since userland teams' page directories also contain the kernel mappings, and avoids unnecessary TLB flushes. To make that possible the vm_translation_map_arch_info objects are reference counted now. This optimization reduces the kernel time of the Haiku build on my machine with SMP disabled a few percent, but interestingly the total time decreases only marginally. Haven't tested with SMP yet, but for full impact CPU affinity would be needed. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28287 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4a4abaf25f6db9d87b65de7f4c5d9ba170f22d44 |
|
12-Oct-2008 |
Axel Dörfler <axeld@pinc-software.de> |
mmlr: * Actually call prepare_sleep_state() instead of calling enter_sleep_state() twice... * Commented out disabling interrupts when calling enter_sleep_state(), as our ACPI modules would then crash (needs memory & uses sems with interrupts disabled). This way, it at least works on some hardware, including emulators (as before). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@28004 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
b18c9b97aeb4a7af1c5bca0bc99f02ad19e716f4 |
|
10-Oct-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
* Implemented x86 assembly version of memset(). * memset() is now available through the commpage. * CPU modules can provide a model-optimized memset(). git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27952 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
8a85be4636d96fe300265720e9522aa372b5749c |
|
24-Sep-2008 |
Ingo Weinhold <ingo_weinhold@gmx.de> |
Register the commpage as an image and its entries as symbols. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27722 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
4722f139ea3d649ea1657d1ffe2351f3a38d9a6b |
|
14-Sep-2008 |
Axel Dörfler <axeld@pinc-software.de> |
* If we're in the kernel debugger, we won't even try to use ACPI to power off, as we cannot do so with interrupts turned off (ACPI needs to allocate memory dynamically). * Turn off interrupts right before going to sleep (_GTS), this at least works in VMware, maybe it also works on real hardware. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27500 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
da4a0bff5e57ba1db7c636a0adae105499b05935 |
|
11-Sep-2008 |
Stefano Ceccherini <stefano.ceccherini@gmail.com> |
fix gcc4 build git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27411 a95241bf-73f2-0310-859d-f6bbb57e9c96
|
#
cb387cfb2f88cf171120d5c8a93f4c2a9680db9f |
|
10-Sep-2008 |
Axel Dörfler <axeld@pinc-software.de> |
* Added acpi_shutdown() method. If the ACPI bus manager is installed, this will be used now. Tested only with VMware so far. * apm_shutdown() is now called with interrupts turned on. * Renamed arch_cpu.c to arch_cpu.cpp. * Minor cleanup. git-svn-id: file:///srv/svn/repos/haiku/haiku/trunk@27404 a95241bf-73f2-0310-859d-f6bbb57e9c96
|