<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>DSpace Community: Journals published in CSE</title>
  <link rel="alternate" href="http://103.99.128.19:8080/xmlui/handle/123456789/35" />
  <subtitle>Journals published in CSE</subtitle>
  <id>http://103.99.128.19:8080/xmlui/handle/123456789/35</id>
  <updated>2026-04-19T10:31:44Z</updated>
  <dc:date>2026-04-19T10:31:44Z</dc:date>
  <entry>
    <title>An enhanced method of initial cluster center selection for K-means algorithm</title>
    <link rel="alternate" href="http://103.99.128.19:8080/xmlui/handle/123456789/377" />
    <author>
      <name>Rahman, Zillur</name>
    </author>
    <author>
      <name>Hossain, Sabir</name>
    </author>
    <author>
      <name>Hasan, Mohammad</name>
    </author>
    <author>
      <name>Imteaj, Ahmed</name>
    </author>
    <id>http://103.99.128.19:8080/xmlui/handle/123456789/377</id>
    <updated>2024-01-10T09:30:29Z</updated>
    <published>2021-10-19T00:00:00Z</published>
    <summary type="text">Title: An enhanced method of initial cluster center selection for K-means algorithm
Authors: Rahman, Zillur; Hossain, Sabir; Hasan, Mohammad; Imteaj, Ahmed
Abstract: Clustering is one of the most widely used techniques for finding patterns in a dataset and can be applied in many different applications and analyses. K-means, the most popular and simplest clustering algorithm, may become trapped in local minima if it is not properly initialized, and its initialization is done randomly. In this paper, we propose a novel approach to improve initial cluster center selection for the K-means algorithm. The approach is based on the fact that the initial centroids must be well separated from each other, since the final clusters are separated groups in feature space. The Convex Hull algorithm facilitates computing the first two centroids, and the remaining ones are selected according to their distance from the previously selected centers. To ensure the selection of one center per cluster, we use the nearest-neighbor technique. To check the robustness of our proposed algorithm, we consider several real-world datasets. We obtained only 7.33%, 7.90%, and 0% clustering error on the Iris, Letter, and Ruspini datasets respectively, which demonstrates better performance than other existing systems. The results indicate that our proposed method outperforms the conventional K-means approach by accelerating the computation when the number of clusters is greater than 2.
Description: Conference paper</summary>
    <dc:date>2021-10-19T00:00:00Z</dc:date>
  </entry>
</feed>