Abstract
Modern machine translation systems are trained on large volumes of parallel data obtained with heuristic web-crawling methods. The resulting low-quality data leads to systematic translation errors that are often quite noticeable to human readers. To fix such errors, this work introduces a re-ranking of model hypotheses based on human markup. We show that human markup can be used not only to increase the overall quality of translation, but also to significantly reduce the number of systematic translation errors. In addition, the relative simplicity of collecting human markup and integrating it into the model training process opens up new opportunities for adapting translation models to new domains such as online retail.