dc.contributor.author |
Haque, Md. Mokammel |
|
dc.contributor.author |
Rahman, Mohammad Obaidur |
|
dc.contributor.author |
Pieprzyk, Josef |
|
dc.date.accessioned |
2024-03-12T05:42:03Z |
|
dc.date.available |
2024-03-12T05:42:03Z |
|
dc.date.issued |
2013-11-21 |
|
dc.identifier.uri |
http://103.99.128.19:8080/xmlui/handle/123456789/400 |
|
dc.description.abstract |
BKZ and its variants are considered the most efficient lattice reduction algorithms, balancing output quality and runtime. A progressive approach (gradually increasing the block size) has been attempted in several works to improve performance, but an actual analysis of this approach has never been reported. In this paper, we present experimental evidence of its complexity compared with the direct approach. We observe that considerable time savings can be achieved if, at each step, the output basis of the previously reduced block size is used as the input basis for the current block size (with the block size increased). We then attempt to find pseudo-collisions in the SWIFFT hash function and show that a different set of parameters produces a special shape of the Gram-Schmidt norms, other than that predicted by the Geometric Series Assumption (GSA), which the experiments suggest is more efficient. |
en_US |
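dc.description.note |
The progressive strategy described in the abstract can be illustrated with a minimal sketch, assuming the fpylll library; the function name, dimension, and block-size schedule below are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the progressive-BKZ idea: reduce with a small block size
# first, then feed that output basis into BKZ with the next, larger block size.
from fpylll import IntegerMatrix, LLL, BKZ

def progressive_bkz(A, start=2, end=20, step=2):
    # Standard LLL preprocessing before any BKZ call.
    LLL.reduction(A)
    # Each BKZ call starts from the basis produced by the previous block size,
    # rather than running the final block size directly on the raw basis.
    for beta in range(start, end + 1, step):
        BKZ.reduction(A, BKZ.Param(block_size=beta))
    return A

# Hypothetical usage on a random q-ary lattice basis.
A = IntegerMatrix.random(60, "qary", k=30, bits=20)
progressive_bkz(A)
|
en_US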
dc.description.sponsorship |
IEB, Chittagong |
en_US |
dc.language.iso |
en_US |
en_US |
dc.publisher |
Department of Computer Science and Engineering, CUET |
en_US |
dc.relation.ispartofseries |
NCICIT; |
|
dc.subject |
Lattice reduction |
en_US |
dc.subject |
BKZ |
en_US |
dc.subject |
Gram-Schmidt vectors |
en_US |
dc.subject |
SWIFFT |
en_US |
dc.title |
Analysing Progressive-BKZ Lattice Reduction Algorithm |
en_US |
dc.title.alternative |
1st National Conference on Intelligent Computing and Information Technology 2013 |
en_US |
dc.title.alternative |
NCICIT 2013 |
en_US |
dc.type |
Article |
en_US |