<?xml version="1.0" encoding="UTF-8"?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>Journals in CSE</title>
<link href="http://103.99.128.19:8080/xmlui/handle/123456789/38" rel="alternate"/>
<subtitle>Journals published in CSE</subtitle>
<id>http://103.99.128.19:8080/xmlui/handle/123456789/38</id>
<updated>2026-04-19T10:36:52Z</updated>
<dc:date>2026-04-19T10:36:52Z</dc:date>
<entry>
<title>An enhanced method of initial cluster center selection for K-means algorithm</title>
<link href="http://103.99.128.19:8080/xmlui/handle/123456789/377" rel="alternate"/>
<author>
<name>Rahman, Zillur</name>
</author>
<author>
<name>Hossain, Sabir</name>
</author>
<author>
<name>Hasan, Mohammad</name>
</author>
<author>
<name>Imteaj, Ahmed</name>
</author>
<id>http://103.99.128.19:8080/xmlui/handle/123456789/377</id>
<updated>2024-01-10T09:30:29Z</updated>
<published>2021-10-19T00:00:00Z</published>
<summary type="text">An enhanced method of initial cluster center selection for K-means algorithm
Rahman, Zillur; Hossain, Sabir; Hasan, Mohammad; Imteaj, Ahmed
Clustering is one of the most widely used techniques for
finding patterns in a dataset and can be applied in many different
applications and analyses. K-means, the most popular and simplest
clustering algorithm, is conventionally initialized at random and
can therefore get trapped in local minima. In this paper, we propose
a novel approach to improve initial cluster center selection for the
K-means algorithm. The approach is based on the fact that the initial
centroids must be well separated from each other, since the final
clusters form separated groups in feature space. The convex hull
algorithm is used to compute the first two centroids, and the
remaining ones are selected according to their distance from the
previously selected centers. To ensure the selection of one center
per cluster, we use the nearest neighbor technique. To check the
robustness of our proposed algorithm, we evaluate it on several
real-world datasets. We obtained clustering errors of only 7.33%,
7.90%, and 0% on the Iris, Letter, and Ruspini datasets,
respectively, which demonstrates better performance than other
existing methods. The results also indicate that our proposed method
outperforms the conventional K-means approach by accelerating the
computation when the number of clusters is greater than 2.
Conference paper
</summary>
<dc:date>2021-10-19T00:00:00Z</dc:date>
</entry>
</feed>
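<!--
The abstract above outlines the initialization: seed the first two
centers from the convex hull, pick the rest by distance from the
already-chosen centers, and apply a nearest neighbor step to keep one
center per cluster. The Python sketch below is one plausible reading
of that outline, not the authors' code: the farthest hull pair, the
max-min distance rule, and the neighbour averaging in smooth() are
all assumptions, since only the abstract is available in this feed.

    import numpy as np
    from scipy.spatial import ConvexHull

    def smooth(X, point, n_neighbors):
        # Replace a raw candidate with the mean of its nearest
        # neighbours, a stand-in for the paper's "nearest neighbor
        # technique" that keeps a center from landing on an outlier.
        order = np.argsort(np.linalg.norm(X - point, axis=1))
        return X[order[:n_neighbors]].mean(axis=0)

    def initial_centers(X, k, n_neighbors=5):
        X = np.asarray(X, dtype=float)
        # Step 1 (assumed): the two farthest-apart convex hull
        # vertices seed the first two centers.
        verts = X[ConvexHull(X).vertices]
        d = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=2)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        centers = [smooth(X, verts[i], n_neighbors),
                   smooth(X, verts[j], n_neighbors)]
        # Step 2 (assumed): each further center is the point with the
        # largest minimum distance to the centers chosen so far.
        while len(centers) < k:
            dists = np.linalg.norm(
                X[:, None, :] - np.asarray(centers)[None, :, :], axis=2)
            far = np.argmax(dists.min(axis=1))
            centers.append(smooth(X, X[far], n_neighbors))
        return np.asarray(centers)

The resulting centers can then seed a standard K-means run, for
example with scikit-learn:

    from sklearn.cluster import KMeans
    labels = KMeans(n_clusters=3, init=initial_centers(X, 3),
                    n_init=1).fit_predict(X)
-->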
