
Implementing the k-Means Algorithm in Python


Clustering is a form of unsupervised learning that groups similar objects into the same cluster, somewhat like fully automatic classification. The more similar the objects within a cluster, and the more distinct the objects across clusters, the better the clustering.

1. The k-Means Clustering Algorithm

k-means clustering partitions the data into k clusters, each described by its centroid, i.e. the mean of all points in the cluster. The algorithm first picks k initial points at random as centroids, then assigns each data point to the cluster whose centroid is nearest (typically by Euclidean distance). Each cluster's centroid is then recomputed as the mean of the points assigned to it, the points are reassigned, and this repeats until the cluster assignments no longer change.

Pseudocode:

Create k cluster centroids at random
While any point's cluster assignment changes:
    For each data point in the dataset:
        For each centroid:
            Compute the distance from the point to the centroid
        Assign the point to the cluster of the nearest centroid
    For each cluster, set its centroid to the mean of all points in the cluster
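To make one pass of the assignment/update loop concrete, here is a tiny traced example on four one-dimensional points with k = 2; the numbers are invented purely for illustration:

import numpy as np

points = np.array([0., 1., 9., 10.])
centroids = np.array([1., 9.])    # hypothetical initial centroids
# Assignment step: each point joins the cluster of its nearest centroid.
labels = np.argmin(np.abs(points[:, None] - centroids[None, :]), axis=1)
print(labels)       # [0 0 1 1]
# Update step: each centroid moves to the mean of its cluster.
centroids = np.array([points[labels == j].mean() for j in range(2)])
print(centroids)    # [0.5 9.5]; a second pass changes no assignments, so k-means stops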

Python implementation:

import numpy as np
import matplotlib.pyplot as plt


def loadDataSet(fileName):
    """Load a tab-separated text file into a float32 numpy array."""
    dataMat = []
    with open(fileName) as f:
        for line in f.readlines():
            line = line.strip().split('\t')  # the original '/t' would not split on tabs
            dataMat.append(line)
    dataMat = np.array(dataMat).astype(np.float32)
    return dataMat


def distEclud(vecA, vecB):
    """Euclidean distance between two vectors."""
    return np.sqrt(np.sum(np.power(vecA - vecB, 2)))


def randCent(dataSet, k):
    """Create k random centroids within the bounds of each feature."""
    m = np.shape(dataSet)[1]
    center = np.mat(np.ones((k, m)))
    for i in range(m):
        centmin = min(dataSet[:, i])
        centmax = max(dataSet[:, i])
        center[:, i] = centmin + (centmax - centmin) * np.random.rand(k, 1)
    return center


def kMeans(dataSet, k, distMeans=distEclud, createCent=randCent):
    """Standard k-means: alternate assignment and centroid update until stable."""
    m = np.shape(dataSet)[0]
    # clusterAssment: column 0 = cluster index, column 1 = squared distance
    clusterAssment = np.mat(np.zeros((m, 2)))
    centroids = createCent(dataSet, k)
    clusterChanged = True
    while clusterChanged:
        clusterChanged = False
        # Assignment step: attach every point to its nearest centroid.
        for i in range(m):
            minDist = np.inf
            minIndex = -1
            for j in range(k):
                distJI = distMeans(dataSet[i, :], centroids[j, :])
                if distJI < minDist:
                    minDist = distJI
                    minIndex = j
            if clusterAssment[i, 0] != minIndex:
                clusterChanged = True
            clusterAssment[i, :] = minIndex, minDist ** 2
        # Update step: move each centroid to the mean of its points.
        for cent in range(k):
            ptsInClust = dataSet[np.nonzero(clusterAssment[:, 0].A == cent)[0]]
            centroids[cent, :] = np.mean(ptsInClust, axis=0)
    return centroids, clusterAssment


data = loadDataSet('testSet.txt')
muCentroids, clusterAssing = kMeans(data, 4)
fig = plt.figure(0)
ax = fig.add_subplot(111)
ax.scatter(data[:, 0], data[:, 1], c=clusterAssing[:, 0].A.ravel())
plt.show()
print(clusterAssing)
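For a quick cross-check of the hand-rolled implementation, the same data can be clustered with scikit-learn's KMeans class. This is an optional sketch, not part of the original article, and assumes scikit-learn is installed; exact centroids will differ between runs because initialization is random:

from sklearn.cluster import KMeans

# model.labels_ plays the role of clusterAssing[:, 0] above,
# and model.cluster_centers_ the role of muCentroids.
model = KMeans(n_clusters=4, n_init=10).fit(data)
print(model.cluster_centers_)
print(model.labels_[:10])   # cluster index of the first ten points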

2. Bisecting k-Means

k-means may converge to a local minimum rather than the global minimum. A common measure of clustering quality is the sum of squared errors (SSE); because the distances are squared, points far from their centroid are weighted more heavily. To mitigate the local-minimum problem, the bisecting k-means algorithm was proposed.
It starts by treating all points as a single cluster and then repeatedly splits one cluster in two, each time choosing the cluster whose split reduces the total SSE the most, until the requested number of clusters is reached.
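Concretely, since kMeans above already stores each point's squared distance to its centroid in the second column of clusterAssment, the total SSE of a clustering is just a column sum. A minimal helper sketch (totalSSE is not part of the original code):

import numpy as np

def totalSSE(clusterAssment):
    # SSE = sum over all points of the squared distance to the assigned
    # centroid; kMeans stores these squared distances in column 1.
    return float(np.sum(clusterAssment[:, 1]))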

Pseudocode:

Treat all points as one cluster
Compute the SSE
while the number of clusters is less than k:
    for each cluster:
        compute the total error
        run k-means with k = 2 on the cluster
        compute the total error after splitting the cluster in two
    choose the cluster whose split gives the lowest total error and split it

Python implementation:

import numpy as np
import matplotlib.pyplot as plt

# loadDataSet, distEclud, randCent and kMeans are identical to the
# implementations in section 1 and are assumed to be defined here.


def biKmeans(dataSet, k, distMeans=distEclud):
    """Bisecting k-means: repeatedly split the cluster whose 2-means split
    yields the lowest total SSE, until k clusters exist."""
    m = np.shape(dataSet)[0]
    clusterAssment = np.mat(np.zeros((m, 2)))
    # Start with a single cluster whose centroid is the mean of all points.
    centroid0 = np.mean(dataSet, axis=0).tolist()
    centList = [centroid0]
    for j in range(m):
        clusterAssment[j, 1] = distMeans(dataSet[j, :], np.mat(centroid0)) ** 2
    while len(centList) < k:
        lowestSSE = np.inf
        # Try a 2-means split of every current cluster and keep the best one.
        for i in range(len(centList)):
            ptsInCurrCluster = dataSet[np.nonzero(clusterAssment[:, 0].A == i)[0], :]
            centroidMat, splitClustAss = kMeans(ptsInCurrCluster, 2, distMeans)
            sseSplit = np.sum(splitClustAss[:, 1])
            sseNotSplit = np.sum(clusterAssment[np.nonzero(clusterAssment[:, 0].A != i)[0], 1])
            if (sseSplit + sseNotSplit) < lowestSSE:
                bestCentToSplit = i
                bestNewCents = centroidMat.copy()
                bestClustAss = splitClustAss.copy()
                lowestSSE = sseSplit + sseNotSplit
        print('the best cent to split is', bestCentToSplit)
        # Relabel the two halves: one keeps the old index, the other gets a new one.
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 1)[0], 0] = len(centList)
        bestClustAss[np.nonzero(bestClustAss[:, 0].A == 0)[0], 0] = bestCentToSplit
        clusterAssment[np.nonzero(clusterAssment[:, 0].A == bestCentToSplit)[0], :] = bestClustAss.copy()
        centList[bestCentToSplit] = bestNewCents[0, :].tolist()[0]
        centList.append(bestNewCents[1, :].tolist()[0])
    return np.mat(centList), clusterAssment


data = loadDataSet('testSet2.txt')
muCentroids, clusterAssing = biKmeans(data, 3)
fig = plt.figure(0)
ax = fig.add_subplot(111)
ax.scatter(data[:, 0], data[:, 1], c=clusterAssing[:, 0].A.ravel(), cmap=plt.cm.Paired)
ax.scatter(muCentroids[:, 0].A.ravel(), muCentroids[:, 1].A.ravel())
plt.show()
print(clusterAssing)
print(muCentroids)
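Since every split is chosen to minimize the total SSE, a quick sanity check is to print the final error and compare it with a coarser run; a minimal sketch reusing the variables above (results vary with the random initialization):

print('final SSE with k=3:', float(np.sum(clusterAssing[:, 1])))
# A coarser clustering should normally leave a larger residual error.
_, coarse = biKmeans(data, 2)
print('final SSE with k=2:', float(np.sum(coarse[:, 1])))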

Code and dataset download: K-means

That concludes this article. I hope it helps with your studies, and thank you for supporting 武林网.
