#ifndef _X86_IRQFLAGS_H_
#define _X86_IRQFLAGS_H_

#include <asm/processor-flags.h>

#ifndef __ASSEMBLY__

/* Provide __cpuidle; we can't safely include <linux/cpu.h> */
#define __cpuidle __attribute__((__section__(".cpuidle.text")))

/*
 * Interrupt control:
 */

static inline unsigned long native_save_fl(void)
{
	unsigned long flags;

	/*
	 * "=rm" is safe here, because "pop" adjusts the stack before
	 * it evaluates its effective address -- this is part of the
	 * documented behavior of the "pop" instruction.
	 */
	asm volatile("# __raw_save_flags\n\t"
		     "pushf ; pop %0"
		     : "=rm" (flags)
		     : /* no input */
		     : "memory");

	return flags;
}

static inline void native_restore_fl(unsigned long flags)
{
	asm volatile("push %0 ; popf"
		     : /* no output */
		     : "g" (flags)
		     : "memory", "cc");
}

static inline void native_irq_disable(void)
{
	asm volatile("cli" : : : "memory");
}

static inline void native_irq_enable(void)
{
	asm volatile("sti" : : : "memory");
}

static inline __cpuidle void native_safe_halt(void)
{
	asm volatile("sti; hlt" : : : "memory");
}

static inline __cpuidle void native_halt(void)
{
	asm volatile("hlt" : : : "memory");
}

#endif

#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
#ifndef __ASSEMBLY__
#include <linux/types.h>

static inline notrace unsigned long arch_local_save_flags(void)
{
	return native_save_fl();
}

static inline notrace void arch_local_irq_restore(unsigned long flags)
{
	native_restore_fl(flags);
}

static inline notrace void arch_local_irq_disable(void)
{
	native_irq_disable();
}

static inline notrace void arch_local_irq_enable(void)
{
	native_irq_enable();
}

/*
 * Used in the idle loop; sti takes one instruction cycle
 * to complete:
 */
static inline __cpuidle void arch_safe_halt(void)
{
	native_safe_halt();
}

/*
 * Used when interrupts are already enabled or to
 * shutdown the processor:
 */
static inline __cpuidle void halt(void)
{
	native_halt();
}

/*
 * For spinlocks, etc:
 */
static inline notrace unsigned long arch_local_irq_save(void)
{
	unsigned long flags = arch_local_save_flags();

	arch_local_irq_disable();

	return flags;
}
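/*
 * Illustrative sketch, not part of this header's API: the
 * save/disable/restore pattern that arch_local_irq_save() exists for.
 * Real kernel code would go through the local_irq_save()/
 * local_irq_restore() wrappers from <linux/irqflags.h>; the helper name
 * below is hypothetical and only demonstrates the intended call order.
 */
static inline void example_irq_protected_section(void (*critical)(void))
{
	unsigned long flags;

	flags = arch_local_irq_save();	/* snapshot EFLAGS, then cli */
	critical();			/* runs with interrupts masked */
	arch_local_irq_restore(flags);	/* IF is set again only if it
					   was set before the save */
}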
#else

#define ENABLE_INTERRUPTS(x)	sti
#define DISABLE_INTERRUPTS(x)	cli

#ifdef CONFIG_X86_64
#define SWAPGS	swapgs
/*
 * Currently paravirt can't handle swapgs nicely when we
 * don't have a stack we can rely on (such as a user space
 * stack). So we either find a way around these or just fault
 * and emulate if a guest tries to call swapgs directly.
 *
 * Either way, this is a good way to document that we don't
 * have a reliable stack. x86_64 only.
 */
#define SWAPGS_UNSAFE_STACK	swapgs

#define PARAVIRT_ADJUST_EXCEPTION_FRAME	/*  */

#define INTERRUPT_RETURN	jmp native_iret
#define USERGS_SYSRET64				\
	swapgs;					\
	sysretq;
#define USERGS_SYSRET32				\
	swapgs;					\
	sysretl

#else
#define INTERRUPT_RETURN		iret
#define ENABLE_INTERRUPTS_SYSEXIT	sti; sysexit
#define GET_CR0_INTO_EAX		movl %cr0, %eax
#endif

#endif /* __ASSEMBLY__ */
#endif /* CONFIG_PARAVIRT */

#ifndef __ASSEMBLY__
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
	return !(flags & X86_EFLAGS_IF);
}

static inline int arch_irqs_disabled(void)
{
	unsigned long flags = arch_local_save_flags();

	return arch_irqs_disabled_flags(flags);
}
#endif /* !__ASSEMBLY__ */

#ifdef __ASSEMBLY__
#ifdef CONFIG_TRACE_IRQFLAGS
#  define TRACE_IRQS_ON		call trace_hardirqs_on_thunk;
#  define TRACE_IRQS_OFF	call trace_hardirqs_off_thunk;
#else
#  define TRACE_IRQS_ON
#  define TRACE_IRQS_OFF
#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
#  ifdef CONFIG_X86_64
#    define LOCKDEP_SYS_EXIT		call lockdep_sys_exit_thunk
#    define LOCKDEP_SYS_EXIT_IRQ	\
	TRACE_IRQS_ON;			\
	sti;				\
	call lockdep_sys_exit_thunk;	\
	cli;				\
	TRACE_IRQS_OFF;
#  else
#    define LOCKDEP_SYS_EXIT		\
	pushl %eax;			\
	pushl %ecx;			\
	pushl %edx;			\
	call lockdep_sys_exit;		\
	popl %edx;			\
	popl %ecx;			\
	popl %eax;
#    define LOCKDEP_SYS_EXIT_IRQ
#  endif
#else
#  define LOCKDEP_SYS_EXIT
#  define LOCKDEP_SYS_EXIT_IRQ
#endif
#endif /* __ASSEMBLY__ */
#endif
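/*
 * Illustrative sketch, an assumption rather than code from this file: how
 * the C-side state tests above are typically consumed. X86_EFLAGS_IF comes
 * from <asm/processor-flags.h>; when that bit is clear in the saved flags,
 * interrupts are masked on this CPU.
 *
 *	unsigned long flags = arch_local_save_flags();
 *
 *	if (arch_irqs_disabled_flags(flags))
 *		;	// caller already runs with IRQs off
 *
 *	// or, without an explicit snapshot:
 *	if (arch_irqs_disabled())
 *		;	// same test against the live EFLAGS
 */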