"Distance-based classifier" is a pretty ambiguous term. I will assume for this answer that you are referring to a classifier basing it's decision on the distance calculated from the target instance to the training instances, for example the k-Nearest Neighbors algorithm.
In k-Nearest Neighbors, you determine the k training instances nearest to your target instance. Figuring out which k are the nearest involves computing some distance function; if all the input features are normalized to real values between 0 and 1, this is usually plain Euclidean distance. Those neighbors then vote on the classification of the target. So the target is classified based only on the nearby instances, and anything farther away is ignored. More sophisticated distance-based classifiers may use more (or all) of the training instances, but weight each one inversely by its distance. These tend to be among the simplest types of machine learning classifiers.
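To make the voting concrete, here is a minimal pure-Python sketch of k-NN. The function name and the toy data are invented for illustration, and it assumes the features are already normalized to [0, 1] so Euclidean distance is meaningful:

```python
import math
from collections import Counter

def knn_classify(target, train, k=3):
    """Classify `target` by majority vote of its k nearest training points.
    `train` is a list of (features, label) pairs with equal-length feature tuples."""
    # Euclidean distance from the target to every training instance
    dists = [(math.dist(target, x), label) for x, label in train]
    dists.sort(key=lambda d: d[0])
    # Vote among the k nearest neighbors; everything farther away is ignored
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: two clusters in the unit square
train = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"), ((0.15, 0.25), "A"),
         ((0.8, 0.9), "B"), ((0.9, 0.8), "B"), ((0.85, 0.75), "B")]
print(knn_classify((0.2, 0.2), train))  # a point near cluster A
```

The inverse-distance-weighted variant mentioned above would replace the plain `Counter` vote with a vote weighted by `1 / distance` over all training points.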
An SVM, on the other hand, attempts to find a hyperplane separating the different classes of training instances with the maximum margin. Basically, it looks to build a fence between the two classes, letting as few instances as possible end up on the wrong side of the fence, while keeping the largest possible "no-man's land" between the two sides (non-binary classification is typically handled by combining several such binary separators). This is why SVMs are also called large-margin classifiers: they create a large margin between your data points and the decision boundary (the hyperplane). The most important training instances are the ones that define the boundary; these are the support vectors that give the method its name. The tradeoff between how many instances are allowed on the wrong side and how complicated the boundary may be controls the complexity of the model. SVM models are generally much more complex than distance-based classifiers, but they take more information into account when classifying the target instance and will generally achieve better accuracy.
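As a hedged sketch of the margin idea, not how production libraries actually solve it, the following pure-Python code trains a linear soft-margin SVM by Pegasos-style sub-gradient descent on the hinge loss. All names, the toy data, and the hyperparameters are invented for illustration:

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200):
    """Train a linear soft-margin SVM on (features, label) pairs, labels in {-1, +1}.
    Returns the weight vector and bias of the separating hyperplane."""
    data = list(data)            # local copy, so shuffling has no side effects
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    rng = random.Random(0)       # fixed seed for reproducibility
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Regularization step: shrinking w prefers a wider margin
            w = [(1.0 - eta * lam) * wi for wi in w]
            if margin < 1.0:       # inside the margin or misclassified: hinge-loss step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy data: the same two clusters, relabeled as -1 / +1
toy = [((0.1, 0.2), -1), ((0.2, 0.1), -1), ((0.15, 0.25), -1),
       ((0.8, 0.9), 1), ((0.9, 0.8), 1), ((0.85, 0.75), 1)]
w, b = train_linear_svm(toy)
```

The `lam` parameter is the tradeoff knob described above: larger values tolerate more instances on the wrong side in exchange for a wider margin, while smaller values fit the training data more tightly.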