Google Research Blog
The latest news from Research at Google
Paper to Digital in 200+ languages
Wednesday, May 06, 2015
Posted by Dmitriy Genzel and Ashok Popat, Research Scientists, and Dhyanesh Narayanan, Product Manager
Many of the world’s important sources of information - books, newspapers, magazines, pamphlets, and historical documents - are not digital. Unlike digital documents, these paper-based sources of information are difficult to search through or edit, or worse, completely inaccessible to some people. Part of the solution is scanning, which produces a digital image of the page, but to a computer the raw image pixels are not yet recognizable as text. Optical Character Recognition (OCR) technology aims to turn pictures of text into computer text that can be indexed, searched, and edited. For some time, Google Drive has provided OCR capabilities. Recently, we expanded this state-of-the-art technology to support all of the world’s major languages - that’s over 200 languages in more than 25 writing systems. This technology is available to users in two easy steps:
1. Upload a scanned document in its current form (say, as an image or PDF). The example below shows a scanned document in Hindi uploaded to a user’s Drive account as a PNG.
2. Right-click on the document in the Drive interface, and select ‘Open with’ -> ‘Google Docs’.
This opens a Google document with the original image followed by the extracted text.
You don’t even need to specify which language the document is in; the system will determine that automatically. Or, you can use the Google Drive API for more explicit control over the language detection in documents. For example, here is an invocation of the Drive API in Python:
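The following is an illustrative sketch using the google-api-python-client and google-auth libraries against Drive API v3; the credentials file, the image file name, and the 'hi' (Hindi) language hint are placeholders you would replace with your own.

# Upload an image and ask Drive to convert it into a Google Doc, which
# triggers OCR. The ocrLanguage hint is optional; omit it to let Drive
# detect the language automatically. File names and 'hi' are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

creds = service_account.Credentials.from_service_account_file(
    'service-account.json',                     # placeholder credentials file
    scopes=['https://www.googleapis.com/auth/drive'])
service = build('drive', 'v3', credentials=creds)

media = MediaFileUpload('scanned_page.png', mimetype='image/png')
doc = service.files().create(
    body={
        'name': 'scanned_page',
        # Requesting conversion to the Google Docs format is what triggers OCR.
        'mimeType': 'application/vnd.google-apps.document',
    },
    media_body=media,
    ocrLanguage='hi',    # explicit language hint (ISO 639-1); optional
    fields='id',
).execute()
print('Created Google Doc with extracted text, id:', doc['id'])

Opening the returned document in Google Docs shows the extracted text, just as in the two-step flow above.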
The OCR capability in Drive is also available in the Drive App for Android.
To make this possible, engineering teams across Google pursued an approach to OCR focused on broad language coverage, with a goal of designing an architecture that could potentially work with all existing languages and writing systems. We do this in part by using Hidden Markov Models (HMMs) to make sense of the input as a whole sequence, rather than first trying to break it apart into pieces. This is similar to how modern speech recognition systems recognize audio input.
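As a rough illustration of that sequence-level view (and not of the actual models used in Drive), the toy sketch below runs Viterbi decoding over a tiny, made-up character HMM: instead of classifying each glyph on its own, it scores whole sequences of observations against character transition and emission probabilities.

# Toy illustration of HMM sequence decoding (Viterbi), with made-up states,
# observations, and probabilities; not Google's OCR model.
import math

states = ['c', 'o', 'l']                         # hypothetical character states
start_p = {'c': 0.5, 'o': 0.25, 'l': 0.25}       # P(first character)
trans_p = {                                      # P(next character | current character)
    'c': {'c': 0.1, 'o': 0.6, 'l': 0.3},
    'o': {'c': 0.2, 'o': 0.3, 'l': 0.5},
    'l': {'c': 0.3, 'o': 0.4, 'l': 0.3},
}
emit_p = {                                       # P(observed image feature | character)
    'c': {'round': 0.6, 'tall': 0.1, 'open': 0.3},
    'o': {'round': 0.7, 'tall': 0.1, 'open': 0.2},
    'l': {'round': 0.1, 'tall': 0.8, 'open': 0.1},
}

def viterbi(observations):
    # best[t][s] = (log-probability of the best path ending in state s at step t,
    #               previous state on that path)
    best = [{s: (math.log(start_p[s] * emit_p[s][observations[0]]), None)
             for s in states}]
    for obs in observations[1:]:
        row = {}
        for s in states:
            prev, score = max(
                ((p, best[-1][p][0] + math.log(trans_p[p][s])) for p in states),
                key=lambda item: item[1])
            row[s] = (score + math.log(emit_p[s][obs]), prev)
        best.append(row)
    # Backtrack from the most likely final state.
    state = max(states, key=lambda s: best[-1][s][0])
    path = [state]
    for t in range(len(best) - 1, 0, -1):
        state = best[t][state][1]
        path.append(state)
    return list(reversed(path))

# Scoring the whole sequence jointly prefers the word-like reading ['c', 'o', 'l'].
print(viterbi(['round', 'round', 'tall']))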
OCR and speech recognition share some challenges - like dealing with background “noise,” different languages, and low-quality inputs. But some challenges are specific to OCR: the variety of typefaces, the different types of scanners and cameras, and the need to work on older material that may contain archaic orthographic and linguistic elements. In addition to utilizing HMMs, we leveraged many of the same technologies used in the Google Handwriting Input app to allow automatic learning of features and to give preference to more likely output, as well as minimum-error-rate training to allow effective combination of multiple sources of information, and modern methods in machine learning to minimize manual design and maximize use of data. We also take advantage of advances in internationalization and typesetting by using synthetic data in our training.
Currently, the OCR works best on cleanly scanned, high-resolution documents in the most commonly used typefaces. We are working to improve performance on poor quality scans and challenging text layouts. Give it a try and let us know how it works for you.