

k-means clustering implementation in Python

2020-02-22 23:16:09
Source: reprinted
Contributed by: a reader

The idea behind k-means is fairly simple; put colloquially, it is "birds of a feather flock together". I spent a little time implementing k-means in Python. The algorithm has some inherent weaknesses: one is the choice of the initial positions of the k centroids, for which many people have proposed the k-means++ improvement; another is that there is no fully satisfactory theory for choosing the value of k itself, where the classic approaches are the silhouette coefficient and using bisecting clustering to determine k. An implementation of the bisecting clustering algorithm is included at the end. The code mainly follows the book Machine Learning in Action:

# encoding: utf-8
"""
Created on 2015-09-21
@author: ZHOUMEIXU204
"""
path = u"D://Users//zhoumeixu204//Desktop//python語言機器學習//機器學習實戰代碼  python//機器學習實戰代碼//machinelearninginaction//Ch10//"

import numpy as np


def loadDataSet(fileName):  # load the data set
    dataMat = []
    fr = open(fileName)
    for line in fr.readlines():
        curLine = line.strip().split('\t')
        fltLine = list(map(float, curLine))
        dataMat.append(fltLine)
    return dataMat


def distEclud(vecA, vecB):  # Euclidean distance
    return np.sqrt(np.sum(np.power(vecA - vecB, 2)))


def randCent(dataSet, k):  # build k random centroids within the range of the data
    n = np.shape(dataSet)[1]
    centroids = np.mat(np.zeros((k, n)))
    for j in range(n):
        minJ = np.min(dataSet[:, j])
        rangeJ = float(np.max(dataSet[:, j]) - minJ)
        centroids[:, j] = minJ + rangeJ * np.random.rand(k, 1)
    return centroids


dataMat = np.mat(loadDataSet(path + 'testSet.txt'))
print(dataMat[:, 0])
# every number is greater than -inf and smaller than +inf


def kMeans(dataSet, k, distMeas=distEclud, createCent=randCent):
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))  # column 0: cluster index, column 1: squared error
    centroids = createCent(dataSet, k)
    clusterChanged = True
    while clusterChanged:
        clusterChanged = False
        for i in range(m):
            minDist = np.inf; minIndex = -1  # np.inf is positive infinity
            for j in range(k):
                distJI = distMeas(centroids[j, :], dataSet[i, :])
                if distJI < minDist:
                    minDist = distJI; minIndex = j
            if clusterAssment[i, 0] != minIndex: clusterChanged = True
            clusterAssment[i, :] = minIndex, minDist ** 2
        print(centroids)
        for cent in range(k):
            # np.nonzero returns the indices of the non-zero elements, one array per axis,
            # e.g. np.nonzero([1, 2, 3, 0, 0, 4, 0]) gives indices 0, 1, 2, 5;
            # the [0] keeps only the row indices
            ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = np.mean(ptsInClust, axis=0)
    return centroids, clusterAssment


myCentroids, clustAssing = kMeans(dataMat, 4)
print(myCentroids, clustAssing)


# bisecting k-means
def biKmeans(dataSet, k, distMeas=distEclud):
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))
    centroid0 = np.mean(dataSet, axis=0).tolist()[0]
    centList = [centroid0]
    for j in range(m):
        clusterAssment[j, 1] = distMeas(np.mat(centroid0), dataSet[j, :]) ** 2
    while (len(centList) < k):
        lowestSSE = np.inf
        for i in range(len(centList)):
            ptsInCurrCluster = dataSet[np.nonzero(clusterAssment[:, 0].A == i)[0], :]
            centroidMat, splitClusAss = kMeans(ptsInCurrCluster, 2, distMeas)
            sseSplit = np.sum(splitClusAss[:, 1])
            sseNotSplit = np.sum(clusterAssment[np.nonzero(clusterAssment[:, 0].A != i)[0], 1])
            print("sseSplit, and notSplit:", sseSplit, sseNotSplit)
            if (sseSplit + sseNotSplit) < lowestSSE:
                bestCenToSplit = i
                bestNewCents = centroidMat
                bestClustAss = splitClusAss.copy()
                lowestSSE = sseSplit + sseNotSplit
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 1)[0], 0] = len(centList)
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 0)[0], 0] = bestCenToSplit
        print("the bestCentToSplit is:", bestCenToSplit)
        print("the len of bestClustAss is:", len(bestClustAss))
        centList[bestCenToSplit] = bestNewCents[0, :].tolist()[0]  # store plain lists, consistent with centroid0
        centList.append(bestNewCents[1, :].tolist()[0])
        clusterAssment[np.nonzero(clusterAssment[:, 0].A == bestCenToSplit)[0], :] = bestClustAss
    return centList, clusterAssment


print("bisecting k-means results:")
dataMat3 = np.mat(loadDataSet(path + 'testSet2.txt'))
centList, myNewAssments = biKmeans(dataMat3, 3)
print(centList)
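The initial-centroid problem mentioned above can be addressed with k-means++-style seeding: the first centroid is picked uniformly at random, and each later centroid is sampled with probability proportional to its squared distance from the nearest centroid already chosen. Below is a minimal sketch under those assumptions; randCentPlusPlus is a hypothetical helper written for this article, not part of the book's code, and it reuses distEclud from above:

def randCentPlusPlus(dataSet, k):
    # k-means++-style seeding (hypothetical helper, not from the book):
    # first centroid uniform at random, each later one sampled with
    # probability proportional to squared distance to the nearest chosen centroid
    m, n = np.shape(dataSet)
    centroids = np.mat(np.zeros((k, n)))
    centroids[0, :] = dataSet[np.random.randint(m), :]
    for c in range(1, k):
        # squared distance of every point to its closest already-chosen centroid
        dist2 = np.array([min(distEclud(centroids[j, :], dataSet[i, :]) ** 2
                              for j in range(c))
                          for i in range(m)])
        probs = dist2 / dist2.sum()      # assumes the points are not all identical
        nextIdx = np.random.choice(m, p=probs)
        centroids[c, :] = dataSet[nextIdx, :]
    return centroids

It can be plugged into the kMeans function above through its createCent argument, e.g. kMeans(dataMat, 4, createCent=randCentPlusPlus).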
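For choosing k itself, the silhouette coefficient mentioned in the introduction can be evaluated for a few candidate values and the best-scoring k kept. A rough sketch, assuming scikit-learn is installed (it is not used anywhere in the original code):

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.asarray(dataMat)  # silhouette_score expects a plain array, not np.matrix
for k in range(2, 7):
    labels = KMeans(n_clusters=k, init='k-means++', n_init=10,
                    random_state=0).fit_predict(X)
    # higher silhouette score -> more compact, better-separated clusters
    print(k, silhouette_score(X, labels))

The candidate k with the highest silhouette score is a reasonable default choice.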