patch-r286033-llvm-r219009-x86-codegen-crash.diff revision 286034
Pull in r219009 from upstream llvm trunk (by Adam Nemet):

  [ISel] Keep matching state consistent when folding during X86 address match

  In the X86 backend, matching an address is initiated by the 'addr' complex
  pattern and its friends.  During this process we may reassociate and-of-shift
  into shift-of-and (FoldMaskedShiftToScaledMask) to allow folding of the
  shift into the scale of the address.

  However, as demonstrated by the testcase, this can trigger CSE of not only
  the shift and the AND, which the code is prepared for, but also the
  underlying load node.  In the testcase this node is sitting in the
  RecordedNodes and MatchScope data structures of the matcher and becomes a
  deleted node upon CSE.  Returning from the complex pattern function, we try
  to access it again, hitting an assert because the node is no longer a load
  even though this was checked before.

  Now obviously changing the DAG this late is bending the rules, but I think
  it makes sense somewhat.  Outside of addresses we prefer and-of-shift
  because it may lead to smaller immediates (FoldMaskAndShiftToScale is an
  even better example because it creates a non-canonical node).  We currently
  don't recognize addresses during DAGCombiner, where arguably this
  canonicalization should be performed.  On the other hand, having this in the
  matcher allows us to cover all the cases where an address can be used in an
  instruction.

  I've also talked a little bit to Dan Gohman on llvm-dev, who added the RAUW
  for the new shift node in FoldMaskedShiftToScaledMask.  This RAUW is
  responsible for initiating the recursive CSE on users
  (http://lists.cs.uiuc.edu/pipermail/llvmdev/2014-September/076903.html) but
  it is not strictly necessary since the shift is hooked into the visited
  user.  Of course it's safer to keep the DAG consistent at all times (e.g.
  for an accurate number of uses, etc.).

  So rather than changing the fundamentals, I've decided to continue along the
  previous patches and detect the CSE.  This patch installs a very targeted
  DAGUpdateListener for the duration of a complex-pattern match and updates
  the matching state accordingly.  (Previous patches used HandleSDNode to
  detect the CSE, but that's not practical here.)  The listener is only
  installed on X86.

  I tested that there is no measurable overhead due to this while running
  through the spec2k BC files with llc.  The only thing we pay for is the
  creation of the listener.  The callback never triggers in spec2k since this
  is a corner case.

  Fixes rdar://problem/18206171

This fixes a possible crash in x86 code generation when compiling recent
llvm/clang trunk sources.

Introduced here: http://svnweb.freebsd.org/changeset/base/286033

Index: include/llvm/CodeGen/SelectionDAGISel.h
===================================================================
--- include/llvm/CodeGen/SelectionDAGISel.h
+++ include/llvm/CodeGen/SelectionDAGISel.h
@@ -238,6 +238,12 @@ class SelectionDAGISel : public MachineFunctionPas
                            const unsigned char *MatcherTable,
                            unsigned TableSize);
 
+  /// \brief Return true if complex patterns for this target can mutate the
+  /// DAG.
+  virtual bool ComplexPatternFuncMutatesDAG() const {
+    return false;
+  }
+
 private:
 
   // Calls to these functions are generated by tblgen.
Index: lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
===================================================================
--- lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -2345,6 +2345,45 @@ struct MatchScope {
   bool HasChainNodesMatched, HasGlueResultNodesMatched;
 };
 
+/// \brief A DAG update listener to keep the matching state
+/// (i.e. RecordedNodes and MatchScope) up to date if the target is allowed to
+/// change the DAG while matching.  X86 addressing mode matcher is an example
+/// for this.
+class MatchStateUpdater : public SelectionDAG::DAGUpdateListener
+{
+  SmallVectorImpl<std::pair<SDValue, SDNode*> > &RecordedNodes;
+  SmallVectorImpl<MatchScope> &MatchScopes;
+public:
+  MatchStateUpdater(SelectionDAG &DAG,
+                    SmallVectorImpl<std::pair<SDValue, SDNode*> > &RN,
+                    SmallVectorImpl<MatchScope> &MS) :
+    SelectionDAG::DAGUpdateListener(DAG),
+    RecordedNodes(RN), MatchScopes(MS) { }
+
+  void NodeDeleted(SDNode *N, SDNode *E) {
+    // Some early-returns here to avoid the search if we deleted the node or
+    // if the update comes from MorphNodeTo (MorphNodeTo is the last thing we
+    // do, so it's unnecessary to update matching state at that point).
+    // Neither of these can occur currently because we only install this
+    // update listener during matching of a complex pattern.
+    if (!E || E->isMachineOpcode())
+      return;
+    // Performing linear search here does not matter because we almost never
+    // run this code.  You'd have to have a CSE during complex pattern
+    // matching.
+    for (SmallVectorImpl<std::pair<SDValue, SDNode*> >::iterator I =
+           RecordedNodes.begin(), IE = RecordedNodes.end(); I != IE; ++I)
+      if (I->first.getNode() == N)
+        I->first.setNode(E);
+
+    for (SmallVectorImpl<MatchScope>::iterator I = MatchScopes.begin(),
+           IE = MatchScopes.end(); I != IE; ++I)
+      for (SmallVector<SDValue, 4>::iterator J = I->NodeStack.begin(),
+             JE = I->NodeStack.end(); J != JE; ++J)
+        if (J->getNode() == N)
+          J->setNode(E);
+  }
+};
 }
 
 SDNode *SelectionDAGISel::
@@ -2599,6 +2638,14 @@ SelectCodeCommon(SDNode *NodeToMatch, const unsign
       unsigned CPNum = MatcherTable[MatcherIndex++];
       unsigned RecNo = MatcherTable[MatcherIndex++];
       assert(RecNo < RecordedNodes.size() && "Invalid CheckComplexPat");
+
+      // If target can modify DAG during matching, keep the matching state
+      // consistent.
+      OwningPtr<MatchStateUpdater> MSU;
+      if (ComplexPatternFuncMutatesDAG())
+        MSU.reset(new MatchStateUpdater(*CurDAG, RecordedNodes,
+                                        MatchScopes));
+
       if (!CheckComplexPattern(NodeToMatch, RecordedNodes[RecNo].second,
                                RecordedNodes[RecNo].first, CPNum,
                                RecordedNodes))
Index: lib/Target/X86/X86ISelDAGToDAG.cpp
===================================================================
--- lib/Target/X86/X86ISelDAGToDAG.cpp
+++ lib/Target/X86/X86ISelDAGToDAG.cpp
@@ -290,6 +290,13 @@ namespace {
     const X86InstrInfo *getInstrInfo() const {
      return getTargetMachine().getInstrInfo();
     }
+
+    /// \brief Address-mode matching performs shift-of-and to and-of-shift
+    /// reassociation in order to expose more scaled addressing
+    /// opportunities.
+    bool ComplexPatternFuncMutatesDAG() const {
+      return true;
+    }
   };
 }
 
Index: test/CodeGen/X86/addr-mode-matcher.ll
===================================================================
--- test/CodeGen/X86/addr-mode-matcher.ll
+++ test/CodeGen/X86/addr-mode-matcher.ll
@@ -0,0 +1,62 @@
+; RUN: llc < %s | FileCheck %s
+
+; This testcase used to hit an assert during ISel.  For details, see the big
+; comment inside the function.
+
+; CHECK-LABEL: foo:
+; The AND should be turned into a subreg access.
+; CHECK-NOT: and
+; The shift (leal) should be folded into the scale of the address in the load.
+; CHECK-NOT: leal
+; CHECK: movl {{.*}},4),
+
+target datalayout = "e-m:o-p:32:32-f64:32:64-f80:128-n8:16:32-S128"
+target triple = "i386-apple-macosx10.6.0"
+
+define void @foo(i32 %a) {
+bb:
+  br label %bb1692
+
+bb1692:
+  %tmp1694 = phi i32 [ 0, %bb ], [ %tmp1745, %bb1692 ]
+  %xor = xor i32 0, %tmp1694
+
+; %load1 = (load (and (shl %xor, 2), 1020))
+  %tmp1701 = shl i32 %xor, 2
+  %tmp1702 = and i32 %tmp1701, 1020
+  %tmp1703 = getelementptr inbounds [1028 x i8]* null, i32 0, i32 %tmp1702
+  %tmp1704 = bitcast i8* %tmp1703 to i32*
+  %load1 = load i32* %tmp1704, align 4
+
+; %load2 = (load (shl (and %xor, 255), 2))
+  %tmp1698 = and i32 %xor, 255
+  %tmp1706 = shl i32 %tmp1698, 2
+  %tmp1707 = getelementptr inbounds [1028 x i8]* null, i32 0, i32 %tmp1706
+  %tmp1708 = bitcast i8* %tmp1707 to i32*
+  %load2 = load i32* %tmp1708, align 4
+
+  %tmp1710 = or i32 %load2, %a
+
+; While matching xor we address-match %load1.  The and-of-shift reassociation
+; in address matching transforms this into a shift-of-and, and the resulting
+; node becomes identical to %load2.  CSE replaces %load1, which leaves its
+; references in MatchScope and RecordedNodes stale.
+  %tmp1711 = xor i32 %load1, %tmp1710
+
+  %tmp1744 = getelementptr inbounds [256 x i32]* null, i32 0, i32 %tmp1711
+  store i32 0, i32* %tmp1744, align 4
+  %tmp1745 = add i32 %tmp1694, 1
+  indirectbr i8* undef, [label %bb1756, label %bb1692]
+
+bb1756:
+  br label %bb2705
+
+bb2705:
+  indirectbr i8* undef, [label %bb5721, label %bb5736]
+
+bb5721:
+  br label %bb2705
+
+bb5736:
+  ret void
+}
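
The core idea of the patch can be illustrated outside of LLVM: a listener is
registered for the duration of a mutation-prone phase, and when a node is
CSE'd away in favor of an existing equivalent node, the listener repoints
every recorded reference so the matcher never dereferences a deleted node.
The following is a minimal, self-contained sketch of that pattern; all names
in it (Node, UpdateListener, MatchStateUpdaterSketch, cseReplace) are
illustrative inventions, not LLVM's API.

```cpp
#include <cassert>
#include <vector>

struct Node {
  int Opcode;
};

// Notification interface, loosely analogous to SelectionDAG::DAGUpdateListener.
struct UpdateListener {
  virtual ~UpdateListener() {}
  // Called when N is about to be deleted in favor of the equivalent node E
  // (E may be null if N simply dies with no replacement).
  virtual void nodeDeleted(Node *N, Node *E) = 0;
};

// Sketch of the patch's MatchStateUpdater: keeps a vector of recorded node
// pointers consistent across deletions.
struct MatchStateUpdaterSketch : UpdateListener {
  std::vector<Node *> &Recorded;
  explicit MatchStateUpdaterSketch(std::vector<Node *> &R) : Recorded(R) {}

  void nodeDeleted(Node *N, Node *E) override {
    if (!E)
      return; // Nothing to repoint to.
    // Linear search is acceptable: as the patch notes, this path is only hit
    // when CSE happens during complex-pattern matching, which is rare.
    for (Node *&P : Recorded)
      if (P == N)
        P = E;
  }
};

// Stand-in for the DAG's CSE step: fold Dead into the equivalent Kept node,
// notifying the listener before Dead becomes invalid.
inline void cseReplace(Node *Dead, Node *Kept, UpdateListener &L) {
  L.nodeDeleted(Dead, Kept);
}
```

After `cseReplace`, every recorded pointer to the deleted node refers to its
surviving duplicate, which is exactly the invariant the real listener restores
for RecordedNodes and the MatchScope node stacks.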