<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://103.99.128.19:8080/xmlui/handle/123456789/35">
    <title>DSpace Community: Journals published in CSE</title>
    <link>http://103.99.128.19:8080/xmlui/handle/123456789/35</link>
    <description>Journals published in CSE</description>
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="http://103.99.128.19:8080/xmlui/handle/123456789/377" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-19T10:31:44Z</dc:date>
  </channel>
  <item rdf:about="http://103.99.128.19:8080/xmlui/handle/123456789/377">
    <title>An enhanced method of initial cluster center selection for K-means algorithm</title>
    <link>http://103.99.128.19:8080/xmlui/handle/123456789/377</link>
    <description>Title: An enhanced method of initial cluster center selection for K-means algorithm
Authors: Rahman, Zillur; Hossain, Sabir; Hasan, Mohammad; Imteaj, Ahmed
Abstract: Clustering is one of the widely used techniques to find patterns in a dataset that can be applied in different applications or analyses. K-means, the most popular and simple clustering algorithm, may get trapped in local minima if not properly initialized, and its initialization is done randomly. In this paper, we propose a novel approach to improve initial cluster center selection for the K-means algorithm. The approach is based on the fact that the initial centroids must be well separated from each other, since the final clusters are separated groups in feature space. The Convex Hull algorithm facilitates computing the first two centroids, and the remaining ones are selected according to their distance from previously selected centers. To ensure the selection of one center per cluster, we use the nearest-neighbor technique. To check the robustness of our proposed algorithm, we consider several real-world datasets. We obtained only 7.33%, 7.90%, and 0% clustering error on the Iris, Letter, and Ruspini data respectively, which demonstrates better performance than other existing systems. The results indicate that our proposed method outperforms the conventional K-means approach by accelerating the computation when the number of clusters is greater than 2.
Description: Conference paper</description>
    <dc:date>2021-10-19T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

