Chinese Journal of Computers (计算机学报)
2001, No. 1, pp. 69-77 (9 pages)
QIU Yue-Feng, TIAN Zeng-Ping, JI Wen-Yun, ZHOU Ao-Ying (邱越峰, 田增平, 季文赟, 周傲英)
Keywords: information integration, approximately duplicate records, N-Gram, Pair-wise, clustering, priority queue
Eliminating duplicate records in large databases has drawn great attention and has gradually become a hot issue in data quality research. This paper studies the problem and proposes an efficient N-Gram based approach for detecting approximately duplicate database records. The contributions are: (1) An efficient N-Gram based clustering algorithm that tolerates the most common types of spelling mistakes and clusters approximately duplicate records together, with a computational complexity of only O(N); an improved variant is also proposed which not only detects duplicates but also automatically corrects most insertion and deletion spelling mistakes in the words of records, improving detection accuracy. (2) A highly efficient, application-independent pair-wise comparison algorithm based on edit distance, which decides whether two records are approximate duplicates by computing the edit distance between each pair of words in the two records. (3) An improved priority-queue algorithm for accurately clustering approximately duplicate records: it scans the sorted records sequentially with a fixed-size priority queue, and clusters approximate duplicates by comparing the distance between the current record and the records held in the queue. Furthermore, a suitable experimental environment is constructed and extensive algorithm tests are carried out; detailed analysis of the large body of experimental results validates the efficiency and soundness of the N-Gram based approach to detecting approximately duplicate records.
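The abstract does not reproduce the authors' exact N-Gram clustering algorithm, but the core idea (that similarly spelled strings share most of their character n-grams) can be sketched as follows. The function names and the use of the Dice coefficient here are illustrative assumptions, not the paper's exact formulation:

```python
from collections import Counter

def ngrams(s, n=3):
    """Character n-grams of s, padded so edge characters get full weight."""
    pad = "#" * (n - 1)
    s = pad + s.lower() + pad
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def ngram_similarity(a, b, n=3):
    """Dice coefficient over n-gram multisets: a single-character typo
    disturbs at most n grams, so near-duplicate spellings score high."""
    ga, gb = Counter(ngrams(a, n)), Counter(ngrams(b, n))
    overlap = sum((ga & gb).values())  # multiset intersection size
    return 2 * overlap / (sum(ga.values()) + sum(gb.values()))
```

For example, `ngram_similarity("database", "databse")` (a deletion typo) scores far higher than `ngram_similarity("apple", "orange")`, which is what lets an n-gram key cluster misspelled duplicates together in a single linear pass.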
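The pair-wise comparison step builds on the edit distance between words of the two records. A minimal sketch, assuming a simple matching rule (the threshold values and the `records_similar` decision rule are hypothetical, chosen only for illustration):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via two-row dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def records_similar(r1, r2, word_thresh=1, frac=0.75):
    """Declare two records approximate duplicates when enough of the
    words in r1 have a close (edit distance <= word_thresh) match in r2."""
    w1, w2 = r1.split(), r2.split()
    matched = sum(1 for u in w1
                  if any(edit_distance(u, v) <= word_thresh for v in w2))
    return matched / max(len(w1), 1) >= frac
```

Because the rule only uses edit distances between words, it stays application-independent: no field schema or domain dictionary is needed.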
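The clustering pass over sorted records can be sketched as below. This simplified version keeps cluster representatives in a fixed-size FIFO window rather than the paper's priority queue, and takes the similarity predicate as a parameter; it shows only the scan-and-compare structure, not the authors' exact algorithm:

```python
from collections import deque

def cluster_sorted(records, similar, queue_size=4):
    """Scan sorted records once; compare each record against a fixed-size
    window of recent cluster representatives, joining the first close
    cluster or opening a new one (evicting the oldest when full)."""
    window = deque(maxlen=queue_size)  # (representative, members) pairs
    clusters = []
    for rec in records:
        for rep, members in window:
            if similar(rep, rec):
                members.append(rec)   # join an existing nearby cluster
                break
        else:
            members = [rec]           # no close cluster: start a new one
            clusters.append(members)
            window.append((rec, members))
    return clusters
```

Sorting first makes approximate duplicates land near each other, so a small window suffices and the scan stays linear in the number of records.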