I want to use string similarity functions to find corrupted data in my database.
I came across several of them:
> Jaro-Winkler,
> Levenshtein,
> Euclidean and
> Q-gram,
I would like to know what the differences between them are, and in what situations each of them works best?
Expanding on my wiki-walk comment in the errata, and noting some of the ground-floor literature on the comparability of algorithms that apply to similar problem spaces, let's explore the applicability of these algorithms before we determine if they are numerically comparable.
From Wikipedia, Jaro-Winkler:
In computer science and statistics, the Jaro–Winkler distance
(Winkler, 1990) is a measure of similarity between two strings. It is
a variant of the Jaro distance metric (Jaro, 1989, 1995) and is mainly
used in the area of record linkage (duplicate detection). The higher
the Jaro–Winkler distance for two strings is, the more similar the
strings are. The Jaro–Winkler distance metric is designed and best
suited for short strings such as person names. The score is normalized
such that 0 equates to no similarity and 1 is an exact match.
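To make the mechanics concrete, here is a minimal Python sketch of the Jaro similarity with the Winkler prefix boost. The sliding match window, the scaling factor p = 0.1, and the four-character prefix cap are the commonly cited defaults; the function names are my own:

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: 1.0 for identical strings, 0.0 for no matches."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters match if equal and within this sliding window of each other.
    window = max(0, max(len1, len2) // 2 - 1)
    matched1 = [False] * len1
    matched2 = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Transpositions: matched characters that appear in a different order.
    k = transpositions = 0
    for i in range(len1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    transpositions //= 2
    return (matches / len1 + matches / len2 +
            (matches - transpositions) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Boost the Jaro score for strings sharing a common prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for c1, c2 in zip(s1[:4], s2[:4]):
        if c1 != c2:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

For the textbook example, jaro_winkler("MARTHA", "MARHTA") comes out to about 0.961: the transposed "TH" barely dents the score, and the shared "MAR" prefix boosts it further.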
Levenshtein distance:
In information theory and computer science, the Levenshtein distance
is a string metric for measuring the amount of difference between two
sequences. The term edit distance is often used to refer specifically
to Levenshtein distance.
The Levenshtein distance between two strings is defined as the minimum
number of edits needed to transform one string into the other, with
the allowable edit operations being insertion, deletion, or
substitution of a single character. It is named after Vladimir
Levenshtein, who considered this distance in 1965.
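The definition translates directly into dynamic programming. This sketch keeps only one row of the edit-distance table at a time, a standard memory optimization rather than anything library-specific:

```python
def levenshtein(s1: str, s2: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn s1 into s2."""
    # Iterate over the longer string so the rows we keep are the shorter ones.
    if len(s1) < len(s2):
        s1, s2 = s2, s1
    prev = list(range(len(s2) + 1))  # distance from "" to each prefix of s2
    for i, c1 in enumerate(s1, start=1):
        curr = [i]  # distance from s1[:i] to ""
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            curr.append(min(prev[j] + 1,          # delete c1
                            curr[j - 1] + 1,      # insert c2
                            prev[j - 1] + cost))  # substitute c1 with c2
        prev = curr
    return prev[-1]
```

The classic check: levenshtein("kitten", "sitting") returns 3 (substitute k for s, substitute e for i, insert g).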
Euclidean distance:
In mathematics, the Euclidean distance or Euclidean metric is the
“ordinary” distance between two points that one would measure with a
ruler, and is given by the Pythagorean formula. By using this formula
as distance, Euclidean space (or even any inner product space) becomes
a metric space. The associated norm is called the Euclidean norm.
Older literature refers to the metric as Pythagorean metric.
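Note the mismatch already visible here: Euclidean distance is defined between points (vectors), not between strings, so to apply it at all you first have to embed your strings in a vector space. The character-frequency embedding below is one illustrative, and lossy, choice since it discards character order entirely; it is a sketch, not a recommendation:

```python
import math
import string
from collections import Counter

def euclidean(u: list[float], v: list[float]) -> float:
    """Ordinary straight-line distance, per the Pythagorean formula."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def char_freq_vector(s: str, alphabet: str = string.ascii_lowercase) -> list[float]:
    """Embed a string as its per-character counts; order information is lost."""
    counts = Counter(s.lower())
    return [counts[c] for c in alphabet]

# "listen" and "silent" are anagrams, so this embedding calls them identical:
print(euclidean(char_freq_vector("listen"), char_freq_vector("silent")))  # 0.0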
And Q- or n-gram encoding:
In the fields of computational linguistics and probability, an n-gram
is a contiguous sequence of n items from a given sequence of text or
speech. The items in question can be phonemes, syllables, letters,
words or base pairs according to the application. n-grams are
collected from a text or speech corpus.
The two core advantages of n-gram models (and algorithms that use
them) are relative simplicity and the ability to scale up – by simply
increasing n a model can be used to store more context with a
well-understood space–time tradeoff, enabling small experiments to
scale up very efficiently.
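As a sketch of how q-grams turn into a string comparison, here is Ukkonen's q-gram distance: the number of q-grams occurring in one string but not the other, counted with multiplicity. The "#" padding and the default q = 2 are choices made for the example:

```python
from collections import Counter

def qgrams(s: str, q: int = 2) -> Counter:
    """Multiset of contiguous q-character substrings, padded so that
    leading and trailing characters appear in as many grams as interior ones."""
    padded = "#" * (q - 1) + s + "#" * (q - 1)
    return Counter(padded[i:i + q] for i in range(len(padded) - q + 1))

def qgram_distance(s1: str, s2: str, q: int = 2) -> int:
    """Sum over all q-grams of |count in s1 - count in s2|."""
    g1, g2 = qgrams(s1, q), qgrams(s2, q)
    return sum(((g1 - g2) + (g2 - g1)).values())  # Counter subtraction clamps at 0

print(qgram_distance("nelson", "neilsen"))  # 7 unshared bigrams between the spellings
```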
The trouble is that these algorithms solve different problems and have different applicabilities within the space of all possible algorithms for the longest common subsequence problem, whether applied to your data directly or used to graft a usable metric onto it. In fact, not all of these are even metrics, as some of them do not satisfy the triangle inequality.
Instead of going out of your way to define a dubious scheme for detecting data corruption, do this properly: use checksums and parity bits for your data. Don't try to solve a much harder problem when a simpler solution will do.
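As a sketch of that simpler route, assuming an application-level check (many databases and filesystems can do this for you with built-in page or block checksums), each record can carry a checksum that you recompute and compare on read:

```python
import hashlib

def record_checksum(record: bytes) -> str:
    """Content hash to be stored alongside the record when it is written."""
    return hashlib.sha256(record).hexdigest()

# Hypothetical record; store the checksum next to it at write time.
row = b"jane.doe@example.com,1979-04-12"
stored = record_checksum(row)

# On read, any corruption shows up as a mismatch, no fuzzy matching needed.
if record_checksum(row) != stored:
    print("record is corrupted")
```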