Plainly speaking, a decision tree works like an if-else structure: the end result is a tree that you can walk from the root, making a decision at each node, until you reach a leaf. The catch is that these if-else tests are not meant to be set by hand; our job is to supply a method by which the computer can derive the decision tree we need. The heart of that method is how to pick out the valuable features from all the available ones, and how to order them, best first, from root to leaf. Once we can do that, we can construct a decision tree recursively.
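For example, on the toy dataset used later in this post, the tree the code produces is just a nested dict, and reading it off is exactly a chain of if-else tests:

```python
# Tree learned from the sample data below (features 'no surfacing' and 'flippers'):
tree = {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
# Read as nested if-else:
#   if no_surfacing == 0: 'no'
#   elif flippers == 0:   'no'
#   else:                 'yes'
```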
The guiding principle when splitting a dataset is to make disordered data more ordered. Since this is a question of how ordered the information is, Shannon's information entropy naturally comes to mind. That is what we use for the calculation here (an alternative is Gini impurity). The formula is as follows:
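$$H = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i)$$

where the sum runs over the n class labels and p(x_i) is the fraction of instances carrying label x_i. As a quick check against the sample data below, which has 2 'yes' and 3 'no' labels: H = -(2/5)·log?(2/5) - (3/5)·log?(3/5) ≈ 0.971.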
1. The data must be a list of lists, and all inner lists must have the same length.
2. The last column of the data, i.e. the last element of each instance, should be that instance's class label.
calcShannonEnt(dataSet)
Computes the Shannon entropy of the dataset in two steps: first count the frequency of each class, then apply the formula above.

splitDataSet(dataSet, axis, value)
Splits the dataset, collecting every row that satisfies X[axis] == value into one partition, and returns it (with the axis attribute removed, since it is no longer needed).

chooseBestFeatureToSplit(dataSet)
Chooses the best attribute to split on. The idea is simple: try splitting on each attribute and see which one works best. A set is used here to extract the unique values of a list, which is a very fast way to do it.

majorityCnt(classList)
Because the recursive construction consumes one attribute per split, the attributes may run out before the classification is settled; in that case the node's class is decided by majority vote.

createTree(dataSet, labels)
Builds the decision tree recursively. The labels here are really just names for the classification features, kept for readability and for understanding what follows.
```python
from math import log
import time

def createDataSet():
    # Toy dataset: two binary features plus a class label in the last column
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels

# Compute the Shannon entropy of the class labels
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt

def splitDataSet(dataSet, axis, value):
    # Keep the rows where featVec[axis] == value, with that column removed
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet

def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1  # the last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature

# Because the recursion consumes one attribute per split, the attributes may
# run out before every subset is pure; then the node's class is decided by
# majority vote.
def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    return max(classCount, key=classCount.get)

def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):  # all classes identical: stop splitting
        return classList[0]
    if len(dataSet[0]) == 1:  # all features used up
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]  # copy so recursion does not mutate the original label list
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet,
            bestFeat, value), subLabels)
    return myTree

def main():
    data, labels = createDataSet()
    t1 = time.perf_counter()
    myTree = createTree(data, labels)
    t2 = time.perf_counter()
    print(myTree)
    print('execute for', t2 - t1)

if __name__ == '__main__':
    main()
```
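The code stops after printing the tree; to actually classify a new sample you descend the nested dict from the root. A minimal sketch of such a helper, appended after the code above (the classify function is my addition, not part of the original post):

```python
def classify(tree, featLabels, testVec):
    # Each internal node is a one-key dict: {feature name: {value: subtree}}.
    # Look up which feature this node tests, follow the branch matching the
    # sample's value, and recurse until a leaf (a plain class label) remains.
    featLabel = next(iter(tree))
    featIndex = featLabels.index(featLabel)
    subtree = tree[featLabel][testVec[featIndex]]
    if isinstance(subtree, dict):
        return classify(subtree, featLabels, testVec)
    return subtree

data, labels = createDataSet()
myTree = createTree(data, labels[:])      # pass a copy: createTree mutates labels
print(classify(myTree, labels, [1, 0]))   # -> 'no'
print(classify(myTree, labels, [1, 1]))   # -> 'yes'
```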