
Fine-Tuning OCR-Free Donut Model for Invoice Recognition | by Walid Amamou | Oct, 2022


And comparing its performance to LayoutLM

Image by Vladdeep from Envato

Intelligent document processing (IDP) is the ability to automatically understand the content and structure of documents. This is a critical capability for any organization that needs to process large numbers of documents, such as for customer service, claims processing, or compliance. However, IDP is not a trivial task. Even for the most common document types, such as invoices or resumes, the variety of formats and layouts in the wild can make it very difficult for IDP software to accurately interpret the content.

Current document understanding models, such as LayoutLM, typically require OCR processing to extract the text from documents before they can be processed. While OCR can be an effective way to extract text, it is not without its challenges. OCR accuracy can be impacted by factors such as the quality of the original document, the font used, and the clarity of the text. Moreover, OCR is slow and computationally intensive, which adds another layer of complexity. This can make it difficult to achieve the high level of accuracy needed for IDP. To overcome these challenges, we need new approaches to IDP that can accurately interpret documents without the need for OCR.

Enter Donut, which stands for Document Understanding Transformer: an OCR-free transformer model that achieved state-of-the-art performance, beating even the LayoutLM model in terms of accuracy according to the original paper.

In this tutorial, we are going to fine-tune the new Donut model for invoice extraction and compare its performance to the latest LayoutLM v3. Let's get started!

For reference, below is the Google Colab script to fine-tune the Donut model:

So how is the model able to extract text and understand images without requiring any OCR processing? The Donut architecture is based on a visual encoder and a text decoder. The visual encoder maps the input image x ∈ R^(H×W×C) into a set of embeddings {z_i | z_i ∈ R^d, 1 ≤ i ≤ n}, where n is the feature map size (the number of image patches) and d is the dimension of the latent vectors of the encoder. The authors used the Swin Transformer as the encoder because it showed the best performance in their preliminary study. The text decoder is a BART transformer model that maps the input features into a sequence of subword tokens.
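To make the two halves concrete, here is a minimal sketch that loads the pre-trained base checkpoint and inspects its components; it assumes the donut package exposes the encoder and decoder attributes as in the repo's model.py:

from donut import DonutModel

# Load the pre-trained base checkpoint from the Hugging Face hub
model = DonutModel.from_pretrained("naver-clova-ix/donut-base")

# The visual encoder (Swin Transformer): image patches -> embeddings z_i in R^d
print(type(model.encoder).__name__)  # SwinEncoder

# The text decoder (BART): embeddings -> sequence of subword tokens
print(type(model.decoder).__name__)  # BARTDecoder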

Donut is trained with a teacher-forcing strategy, which feeds the ground-truth token sequence to the decoder instead of the decoder's own previous output. At inference time, the model generates a sequence of tokens conditioned on a prompt that depends on the task we wish to achieve, such as classification, question-answering, or parsing. For example, if we want to extract the class of the document, we feed the image embedding to the decoder along with the task prompt, and the model outputs a text sequence corresponding to the document type. If we are interested in question-answering, we input the question "what is the price of choco mochi?" and the model outputs the answer. The output sequence is then converted to a JSON file. For more information, refer to the original paper.
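For illustration, here is roughly what prompt-based question-answering looks like with the publicly released DocVQA checkpoint; the image file name is a placeholder:

from PIL import Image
from donut import DonutModel

# The task is encoded entirely in the decoder prompt
model = DonutModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
question = "what is the price of choco mochi?"
output = model.inference(
    image=Image.open("receipt.jpg").convert("RGB"),  # placeholder file
    prompt=f"<s_docvqa><s_question>{question}</s_question><s_answer>",
)
print(output["predictions"][0])  # e.g. {"question": "...", "answer": "..."}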

Donut architecture. Source

In this tutorial, we are going to fine-tune the model on 220 invoices that were labeled using the UBIAI text annotation tool, similar to my previous articles on fine-tuning the LayoutLM models. Here is an example that shows the format of the labeled dataset exported from UBIAI.

Image by Author: UBIAI OCR annotation feature

UBIAI supports OCR parsing, native PDF/image annotation, and export in the right format. You can also fine-tune the LayoutLM model right in the UBIAI platform and auto-label your data with it, which can save a lot of manual annotation time.
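Based on the fields the parsing code below accesses, each exported record looks roughly like this (a reconstruction for illustration, not the exact UBIAI schema):

{
  "annotation": [
    {"label": "SELLER", "text": "SAS CALIFRAIS"},
    {"label": "DATE", "text": "20/05/2019"}
  ],
  "images": [
    {"name": "156275474651_2021-10-26_195807.3.txt_image_0.jpg"}
  ]
}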

The first step is to import the needed packages and clone the Donut repo from GitHub.

from PIL import Image
import torch

!git clone https://github.com/clovaai/donut.git
!cd donut && pip install .

from donut import DonutModel
import json
import shutil

Next, we need to extract the labels and parse the image names from the JSON file exported from UBIAI. Set the paths to the labeled dataset and the processed folder (replace them with your own paths).

ubiai_data_folder = "/content/drive/MyDrive/Colab Notebooks/UBIAI_dataset"
ubiai_ocr_results = "/content/drive/MyDrive/Colab Notebooks/UBIAI_dataset/ocr.json"
processed_dataset_folder = "/content/drive/MyDrive/Colab Notebooks/UBIAI_dataset/processed_dataset"

with open(ubiai_ocr_results) as f:
    data = json.load(f)

# Extract the set of labels from the JSON file
all_labels = list()
for j in data:
    all_labels += list(j['annotation'][cc]['label'] for cc in range(len(j['annotation'])))
all_labels = set(all_labels)
all_labels

# Build the ground truth and image paths
images_metadata = list()
images_path = list()
for obs in data:
    ground_truth = dict()
    for ann in obs['annotation']:
        if ann['label'].strip() in ['SELLER', 'DATE', 'TTC', 'INVOICE_NUMBERS', 'TVA']:
            ground_truth[ann['label'].strip()] = ann['text'].strip()
    # Keep only documents where all five entities are present, in a fixed key order
    try:
        ground_truth = {key: ground_truth[key] for key in ['SELLER', 'DATE', 'TTC', 'INVOICE_NUMBERS', 'TVA']}
    except KeyError:
        continue

    images_metadata.append({"gt_parse": ground_truth})
    images_path.append(obs['images'][0]['name'].replace(':', ''))

dataset_len = len(images_metadata)

We split the data into training, test, and validation sets. To do so, create three folders named train, test, and validation, each containing an empty metadata.jsonl file (or set them up programmatically, as in the sketch below), then run the split script:
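A small convenience sketch for that setup, reusing the processed_dataset_folder defined earlier:

import os

for split in ["train", "test", "validation"]:
    split_folder = processed_dataset_folder + "/" + split
    os.makedirs(split_folder, exist_ok=True)
    # Start each split with an empty metadata.jsonl that the split script appends to
    open(split_folder + "/metadata.jsonl", "w").close()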

for i, gt_parse in enumerate(images_metadata):
    # train: first 80%
    if i < round(dataset_len*0.8):
        with open(processed_dataset_folder+"/train/metadata.jsonl", 'a') as f:
            line = {"file_name": images_path[i], "ground_truth": json.dumps(gt_parse, ensure_ascii=False)}
            f.write(json.dumps(line, ensure_ascii=False) + "\n")
        shutil.copyfile(ubiai_data_folder + '/' + images_path[i], processed_dataset_folder + "/train/" + images_path[i])
        # Sanity check for a known file
        if images_path[i] == "050320sasdoodahfev20_2021-09-24_0722.txt_image_0.jpg":
            print('train')

    # test: next 10%
    if round(dataset_len*0.8) <= i < round(dataset_len*0.8) + round(dataset_len*0.1):
        with open(processed_dataset_folder+"/test/metadata.jsonl", 'a') as f:
            line = {"file_name": images_path[i], "ground_truth": json.dumps(gt_parse, ensure_ascii=False)}
            f.write(json.dumps(line, ensure_ascii=False) + "\n")
        shutil.copyfile(ubiai_data_folder + '/' + images_path[i], processed_dataset_folder + "/test/" + images_path[i])
        if images_path[i] == "050320sasdoodahfev20_2021-09-24_0722.txt_image_0.jpg":
            print('test')

    # validation: remaining 10%
    if round(dataset_len*0.8) + round(dataset_len*0.1) <= i < dataset_len:
        with open(processed_dataset_folder+"/validation/metadata.jsonl", 'a') as f:
            line = {"file_name": images_path[i], "ground_truth": json.dumps(gt_parse, ensure_ascii=False)}
            f.write(json.dumps(line, ensure_ascii=False) + "\n")
        shutil.copyfile(ubiai_data_folder + '/' + images_path[i], processed_dataset_folder + "/validation/" + images_path[i])

The script converts our original annotations into JSON Lines format containing the image path and the ground truth:

{"file_name": "156260522812_2021-10-26_195802.2.txt_image_0.jpg", "ground_truth": "{"gt_parse": {"SELLER": "TJF", "DATE": "création-09/05/2019", "TTC": "73,50 €", "INVOICE_NUMBERS": "N° 2019/068", "TVA": "12,25 €"}}"}{"file_name": "156275474651_2021-10-26_195807.3.txt_image_0.jpg", "ground_truth": "{"gt_parse": {"SELLER": "SAS CALIFRAIS", "DATE": "20/05/2019", "TTC": "108.62", "INVOICE_NUMBERS": "7133", "TVA": "5.66"}}"}

Next, go to the "/content/donut/config" folder, create a new file called "train.yaml", and copy in the following config content (make sure to replace the dataset path with your own path):

result_path: "/content/drive/MyDrive/Colab Notebooks/UBIAI_dataset/processed_dataset/result"
pretrained_model_name_or_path: "naver-clova-ix/donut-base" # loading a pre-trained model (from the model hub or a local path)
dataset_name_or_paths: ["/content/drive/MyDrive/Colab Notebooks/UBIAI_dataset/processed_dataset"] # loading datasets (from the model hub or a local path)
sort_json_key: False # the CORD dataset is preprocessed and publicly available at https://huggingface.co/datasets/naver-clova-ix/cord-v2
train_batch_sizes: [1]
val_batch_sizes: [1]
input_size: [1280, 960] # when the input resolution differs from the pre-training setting, some weights will be newly initialized (but the model training would be okay)
max_length: 768
align_long_axis: False
num_nodes: 1
seed: 2022
lr: 3e-5
warmup_steps: 300 # 800/8*30/10, 10%
num_training_samples_per_epoch: 800
max_epochs: 50
max_steps: -1
num_workers: 8
val_check_interval: 1.0
check_val_every_n_epoch: 3
gradient_clip_val: 1.0
verbose: True

Note that you can update the hyperparameters based on your own use case.

We are finally ready to train the model; simply run the command below:

!cd donut && python train.py --config config/train.yaml

Image by Author: Donut model training

The model training takes about 1.5 hours on Google Colab with GPU enabled.

To measure the model's performance, we run it on the test dataset and compare its predictions to the ground truth:

import glob

# `my_model` is the fine-tuned Donut model. Load it from the result folder set in
# train.yaml, e.g. (path components depend on your run):
# my_model = DonutModel.from_pretrained("<result_path>/<exp_name>/<exp_version>")

with open(processed_dataset_folder + '/test/metadata.jsonl') as f:
    result = [json.loads(jline) for jline in f.read().splitlines()]
test_images = glob.glob(processed_dataset_folder + '/test/*.jpg')

acc_dict = {'SELLER': 0, 'DATE': 0, 'TTC': 0, 'INVOICE_NUMBERS': 0, 'TVA': 0}
for path in test_images:
    image = Image.open(path).convert("RGB")

    donut_result = my_model.inference(image=image, prompt="<s_ubiai-donut>")
    returned_labels = donut_result['predictions'][0].keys()

    # Find the ground truth entry matching this image file
    for i in result:
        if i['file_name'] == path[path.index('/test/')+6:]:
            truth = json.loads(i['ground_truth'])['gt_parse']
            break
    # Count exact matches per entity
    for l in [x for x in returned_labels if x in ['SELLER', 'DATE', 'TTC', 'INVOICE_NUMBERS', 'TVA']]:
        if donut_result['predictions'][0][l] == truth[l]:
            acc_dict[l] += 1
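To turn the raw match counts into the per-entity percentages reported below, divide by the number of test images (a small sketch, assuming each entity appears once per invoice):

# Per-entity accuracy as a percentage of the test set
n_test = len(test_images)
for label, hits in acc_dict.items():
    print(f"{label}: {round(100 * hits / n_test)}%")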

Here is the score per entity:

SELLER: 0%, DATE: 47%, TTC: 74%, INVOICE_NUMBERS: 53%, TVA: 63%

Although there were enough examples (274), the SELLER entity had a score of 0%. The rest of the entities had higher scores but were still in the lower range. Now let's try running the model on a new invoice that wasn't part of the training dataset.

Image by Author: Test invoice
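The inference call is the same one used in the evaluation loop above; the file name here is a placeholder for the new invoice:

# Run the fine-tuned model on an unseen invoice
new_invoice = Image.open("new_invoice.jpg").convert("RGB")  # placeholder file
prediction = my_model.inference(image=new_invoice, prompt="<s_ubiai-donut>")
print(prediction["predictions"][0])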

The model predictions are:

'DATE': '31/01/2017',
'TTC': '$1,455.00',
'INVOICE_NUMBERS': 'INVOICE',
'TVA': '$35.00'

The model had trouble extracting the seller name and the invoice number, and it mislabeled the tax (TVA), but it correctly recognized the total price (TTC) and the date. Although the model's performance is relatively low, we can try some hyperparameter tuning to enhance it and/or label more data.

The Donut model has several advantages over its counterpart LayoutLM, such as lower computational cost, lower processing time, and fewer errors due to OCR. But how does the performance compare? According to the original paper, the Donut model performs better than LayoutLM on the CORD dataset.

Model performance score comparison

However, we haven't seen a performance increase when using our own labeled dataset. If anything, LayoutLM was able to capture more entities, such as the seller name and invoice number. This discrepancy could be due to the fact that we haven't done any hyperparameter tuning. Alternatively, it is possible that Donut requires more labeled data to achieve good performance.

In this tutorial, we have focused on data extraction, but the Donut model is also capable of document classification, document question answering, and synthetic data generation, so we have only scratched the surface. The OCR-free model offers many advantages, such as higher processing speed, lower complexity, and less error propagation due to low-quality OCR.

As a next step, we can improve the model performance by performing hyperparameter tuning and labeling more data.

If you are looking to label your own training dataset, don't hesitate to try out the UBIAI OCR annotation feature here for free.

Follow us on Twitter @UBIAI5 or subscribe here!


