
How to build a credit card scanner with React Native


In mobile applications, users often make purchases by entering their credit card details. We’ve likely all had the frustrating experience of typing those 16 digits manually into our smartphones.

Many applications add automation to simplify this process: to enter payment details, users can either take a photo of their credit card or upload a photo from their device’s photo gallery. Cool, right?

In this article, we’ll learn how to implement a similar feature in a React Native app using the Text Recognition API, an ML Kit-based API that can recognize any Latin-based character set. We’ll use the on-device text recognition API and cover creating a new React Native project, why react-native-cardscan is deprecated, integrating react-native-text-recognition, and writing the card number recognition logic.

You can find the complete code for this tutorial in this GitHub repository. Our final UI will look like the GIFs below:

Final Credit Card Scan UI

 

Credit Card Scan UI Upload

 

Creating a new React Native project

First, we’ll create a new React Native project. If you want to add the credit card scanning feature to an existing project, feel free to skip this part.

In your preferred directory, run the command below in your terminal to create a new React Native project:




npx react-native init <project-name> --template react-native-template-typescript

Open the project in your preferred IDE and replace the code in the App.tsx file with the code below:

import React from 'react';
import {SafeAreaView, Text} from 'react-native';

const App: React.FC = () => {
  return (
    <SafeAreaView>
      <Text>Credit Card Scanner</Text>
    </SafeAreaView>
  );
};

export default App;

Now, let’s run the app. For iOS, we need to install pods before building the project:

cd ios && pod install && cd ..

Then, we can build the iOS project:

yarn ios

For Android, we can build the project directly:

yarn android

The step above will start the Metro server as well as the iOS and Android simulators, then run the app on them. Currently, with the code above in App.tsx, we’ll see only a blank screen with text reading Credit Card Scanner.

What is react-native-cardscan?

react-native-cardscan is a wrapper library around CardScan, a minimalistic library for scanning debit and credit cards. react-native-cardscan provides simple plug-and-play usage for credit card scanning in React Native applications. However, at the time of writing, react-native-cardscan is no longer maintained and has been deprecated. Stripe is integrating react-native-cardscan into its own payment solutions for mobile applications. You can check out the new repository on GitHub; however, it is still under development at the time of writing.

Since this library is deprecated, we’ll create our own custom credit card scanning logic with react-native-text-recognition.

Integrating react-native-text-recognition

react-native-text-recognition is a wrapper library built around the Vision framework on iOS and Firebase ML on Android. If you are implementing card scanning in a production application, I’d suggest creating your own native module for text recognition. However, for the simplicity of this tutorial, I’ll use this library.

Let’s write the code to scan credit cards. Before we integrate text recognition, let’s add the other helper libraries we’ll need, react-native-vision-camera and react-native-image-crop-picker. We’ll use these libraries to capture a photo from the device’s camera and to pick images from the phone gallery, respectively:

yarn add react-native-image-crop-picker react-native-vision-camera
// Install pods for iOS
cd ios && pod install && cd ..

If you are on React Native ≥v0.69, react-native-vision-camera may fail to build due to changes made in newer architectures. Please follow the changes in this PR for the resolution.

Now that our helper dependencies are installed, let’s install react-native-text-recognition:

yarn add react-native-text-recognition
pod install
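Once installed, the library exposes a single recognize call that takes a local image path and resolves to an array of recognized strings. Here is a minimal sketch of the call we’ll rely on later in the tutorial; the path argument is just a placeholder for whatever file path the picker or camera gives us:

import TextRecognition from 'react-native-text-recognition';

// Pass any local image path; the promise resolves to a string[] of recognized text blocks
const logRecognizedText = async (imagePath: string) => {
  const result: string[] = await TextRecognition.recognize(imagePath);
  console.log('Recognized text:', result);
};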

With our dependencies set up, let’s start writing our code. Before we can pick images on iOS, we need permission to access the user’s photo gallery. For that, add the snippet below to your iOS project’s Info.plist file:

....
<key>NSPhotoLibraryUsageDescription</key>
<string>Allow Access to Photo Library</string>
....

Next, add the code below to App.tsx to implement the image picking functionality:

import React, {useCallback} from 'react';
import {SafeAreaView, Text, StatusBar, Pressable, StyleSheet} from 'react-native';
import ImagePicker, {ImageOrVideo} from 'react-native-image-crop-picker';

const App: React.FC = () => {

  const pickAndRecognize: () => void = useCallback(async () => {
    ImagePicker.openPicker({
      cropping: false,
    })
      .then(async (res: ImageOrVideo) => {
        console.log('res:', res);
      })
      .catch(err => {
        console.log('err:', err);
      });
  }, []);

  return (
    <SafeAreaView style={styles.container}>
      <StatusBar barStyle={'dark-content'} />
      <Text style={styles.title}>Credit Card Scanner</Text>
      <Pressable style={styles.galleryBtn} onPress={pickAndRecognize}>
        <Text style={styles.btnText}>Pick from Gallery</Text>
      </Pressable>
    </SafeAreaView>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    backgroundColor: '#fff',
  },
  title: {
    fontSize: 20,
    fontWeight: '700',
    color: '#111',
    letterSpacing: 0.6,
    marginTop: 18,
  },
  galleryBtn: {
    paddingVertical: 14,
    paddingHorizontal: 24,
    backgroundColor: '#000',
    borderRadius: 40,
    marginTop: 18,
  },
  btnText: {
    fontSize: 16,
    color: '#fff',
    fontWeight: '400',
    letterSpacing: 0.4,
  },
});

export default App;

In the code above, we’ve added some text and styles to our view, but the main part is where we declared the pickAndRecognize function. Note that we’re not doing any recognition in this function yet; we named it this way because we’ll add the text recognition logic to it later.

For now, the output of the code above will look like the following:

Implement Image Picking Functionality

Now, we’re able to pick images from the user’s photo gallery. Let’s also add the functionality for capturing an image from the camera and viewing the camera preview in the UI.

Add the code below to your App.tsx return statement:

// Export the asset from a file like this or use it directly.
import {Capture} from './assets/icons';
....
<SafeAreaView style={styles.container}>
      ....
      {device && hasPermissions ? (
        <View>
          <Camera
            photo
            enableHighQualityPhotos
            ref={camera}
            style={styles.camera}
            isActive={true}
            device={device}
          />
          <Pressable
            style={styles.captureBtnContainer}
            // We will define this method later
            onPress={captureAndRecognize}>
            <Image source={Capture} />
          </Pressable>
        </View>
      ) : (
        <Text>No Camera Found</Text>
      )}
</SafeAreaView>

Add the related styles:

const styles = StyleSheet.create({
....
  camera: {
    marginVertical: 24,
    height: 240,
    width: 360,
    borderRadius: 20,
    borderWidth: 2,
    borderColor: '#000',
  },
  captureBtnContainer: {
    position: 'absolute',
    bottom: 28,
    right: 10,
  },
....
});

We also need to create some state variables and refs:

const App: React.FC = () => {
  const camera = useRef<Camera>(null);
  const devices = useCameraDevices();
  let device: any = devices.back;
  const [hasPermissions, setHasPermissions] = useState<boolean>(false);
  ....
}
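For reference, the camera and state snippets above assume the following imports at the top of App.tsx; merge the react-native entries with the import we already have. The hook and type names come from react-native-vision-camera and may vary slightly between library versions:

import React, {useCallback, useEffect, useRef, useState} from 'react';
import {
  SafeAreaView,
  Text,
  StatusBar,
  Pressable,
  StyleSheet,
  View,
  Image,
  Alert,
  Linking,
  ActivityIndicator,
} from 'react-native';
import {
  Camera,
  useCameraDevices,
  CameraPermissionRequestResult,
} from 'react-native-vision-camera';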

We’re displaying an Image on the camera view to capture photos. You can go ahead and download the asset.
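The Capture icon imported earlier assumes a small barrel file under an assets folder. That file isn’t shown in the original code, so here is a minimal sketch of what ./assets/icons/index.ts could look like, with a placeholder image name:

// assets/icons/index.ts
// Placeholder file name; point this at the capture icon asset you downloaded
export const Capture = require('./capture.png');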

Before we can preview the camera, we need to add permissions to access the device’s camera. To do so, add the strings below to your iOS project’s Info.plist file:

....
<key>NSCameraUsageDescription</key>
<string>Allow Access to Camera</string>
....

For Android, add the code below to your AndroidManifest.xml file:

....
    <uses-permission android:name="android.permission.CAMERA"/>
....

When our app loads, we need to request permissions from the user. Let’s write the code to do that. Add the code below to your App.tsx file:

....
  useEffect(() => {
    (async () => {
      const cameraPermission: CameraPermissionRequestResult =
        await Camera.requestCameraPermission();
      const microPhonePermission: CameraPermissionRequestResult =
        await Camera.requestMicrophonePermission();
      if (cameraPermission === 'denied' || microPhonePermission === 'denied') {
        Alert.alert(
          'Allow Permissions',
          'Please allow camera and microphone permission to access camera features',
          [
            {
              text: 'Go to Settings',
              onPress: () => Linking.openSettings(),
            },
            {
              text: 'Cancel',
            },
          ],
        );
        setHasPermissions(false);
      } else {
        setHasPermissions(true);
      }
    })();
  }, []);
....

Now, we have a working camera view in our app UI:

Working Camera View App UI

Now that our camera view is working, let’s add the code to capture an image from the camera.

Remember the captureAndRecognize method we want to trigger from our capture button? Let’s define it now. Add the method declaration below in App.tsx:

....
  const captureAndRecognize = useCallback(async () => {
    try {
      const image = await camera.current?.takePhoto({
        qualityPrioritization: 'quality',
        enableAutoStabilization: true,
        flash: 'on',
        skipMetadata: true,
      });
      console.log('image:', image);
    } catch (err) {
      console.log('err:', err);
    }
  }, []);
....

Like the pickAndRecognize method, we haven’t yet added the credit card recognition logic to this method. We’ll do so in the next step.

Writing the card number recognition logic

We are now able to get images from both the device’s photo gallery and the camera. Next, we need to write the logic, which will do the following:

  • Take an image as input and return an array of strings for all the text recognized in that image
  • Iterate through the strings returned in the array and check whether each item is a potential credit card number
  • If we find a credit card number, return that string and display it
  • If we don’t find any matching strings, display an error reading No Valid Credit Card Found

The steps are fairly straightforward. Let’s write the methods:

const findCardNumberInArray: (arr: string[]) => string = arr => {
  let creditCardNumber = '';
  arr.forEach(e => {
    let numericValues = e.replace(/\D/g, '');
    const creditCardRegex =
      /^(?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$/;
    if (creditCardRegex.test(numericValues)) {
      creditCardNumber = numericValues;
      return;
    }
  });
  return creditCardNumber;
};

const validateCard: (result: string[]) => void = result => {
    const cardNumber = findCardNumberInArray(result);
    if (cardNumber?.length) {
      setProcessedText(cardNumber);
      setCardIsFound(true);
    } else {
      setProcessedText('No valid Credit Card found, please try again!!');
      setCardIsFound(false);
    }
};

In the code above, we have written two methods, validateCard and findCardNumberInArray. The validateCard method takes one argument, a string[], or an array of strings. It then passes that array to the findCardNumberInArray method. If a credit card number string is found in the array, this method returns it. If not, it returns an empty string.

Then, we check whether we have a string in the cardNumber variable. If so, we set some state variables; otherwise, we set the state variables to show an error.

Let’s see how the findCardNumberInArray method works. This method also takes one argument of type string[]. It loops through each element in the array and strips all non-numeric characters from the string. Finally, it checks the string against a regex that determines whether it is a valid credit card number.

If the string matches the regex, we return that string as the credit card number from the method. If no string matches the regex, we return an empty string.
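As a quick sanity check, here is roughly how findCardNumberInArray behaves on the kind of string array the recognizer returns; the sample strings below are made up purely for illustration:

// Typical OCR output for a card photo: cardholder name, number, expiry date
const sampleOcrResult = ['JOHN DOE', '4242 4242 4242 4242', '09/26'];

// '4242 4242 4242 4242' is stripped to '4242424242424242', which matches the Visa branch of the regex
console.log(findCardNumberInArray(sampleOcrResult)); // "4242424242424242"

// No element survives as a valid card number, so an empty string comes back
console.log(findCardNumberInArray(['JOHN DOE', '09/26'])); // ""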

You’ll also notice that we have not yet declared these new state variables in our code. Let’s do that now. Add the code below to your App.tsx file:

....
  const [processedText, setProcessedText] = React.useState<string>(
    'Scan a Card to see\nCard Number here',
  );
  const [isProcessingText, setIsProcessingText] = useState<boolean>(false);
  const [cardIsFound, setCardIsFound] = useState<boolean>(false);
....

Now, we just need to plug the validateCard method into our code. Edit your pickAndRecognize and captureAndRecognize methods with the respective code below:

....
  const pickAndRecognize: () => void = useCallback(async () => {
    ....
      .then(async (res: ImageOrVideo) => {
        setIsProcessingText(true);
        const result: string[] = await TextRecognition.recognize(res?.path);
        setIsProcessingText(false);
        validateCard(result);
      })
      .catch(err => {
        console.log('err:', err);
        setIsProcessingText(false);
      });
  }, []);

  const captureAndRecognize = useCallback(async () => {
    ....
      setIsProcessingText(true);
      const result: string[] = await TextRecognition.recognize(
        image?.path as string,
      );
      setIsProcessingText(false);
      validateCard(result);
    } catch (err) {
      console.log('err:', err);
      setIsProcessingText(false);
    }
  }, []);
....
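Note that both methods now call TextRecognition directly, so make sure the library’s default export is imported at the top of App.tsx, as in the earlier sketch:

import TextRecognition from 'react-native-text-recognition';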

With that, we’re done! We just need to show the output in our UI. To do so, add the code below to your App.tsx return statement:

....
      {isProcessingText ? (
        <ActivityIndicator
          size={'large'}
          style={styles.activityIndicator}
          color={'blue'}
        />
      ) : cardIsFound ? (
        <Text style={styles.creditCardNo}>
          {getFormattedCreditCardNumber(processedText)}
        </Text>
      ) : (
        <Text style={styles.errorText}>{processedText}</Text>
      )}
....

The code above displays an ActivityIndicator, or loader, if text processing is in progress. If we have found a credit card, it displays it as text. You’ll also notice we’re using a getFormattedCreditCardNumber method to render the text; we’ll write it next. If all conditions are false, it means we have an error, so we show the text with error styles.
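The snippet above also references three styles (activityIndicator, creditCardNo, and errorText) that aren’t part of the StyleSheet we built earlier. Their exact values aren’t shown in this tutorial, so here is a rough sketch of what they could look like; adjust them to match your design:

const styles = StyleSheet.create({
  ....
  // Assumed values; tweak to taste
  activityIndicator: {
    marginTop: 34,
  },
  creditCardNo: {
    fontSize: 22,
    fontWeight: '700',
    color: '#111',
    letterSpacing: 1,
    marginTop: 34,
  },
  errorText: {
    fontSize: 16,
    color: 'red',
    marginTop: 34,
  },
  ....
});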

Let’s declare the getFormattedCreditCardNumber method now. Add the code below to your App.tsx file:

....

const getFormattedCreditCardNumber: (cardNo: string) => string = cardNo => {
  let formattedCardNo = '';
  for (let i = 0; i < cardNo?.length; i++) {
    if (i % 4 === 0 && i !== 0) {
      formattedCardNo += ` • ${cardNo?.[i]}`;
      continue;
    }
    formattedCardNo += cardNo?.[i];
  }
  return formattedCardNo;
};
....

The method above takes one argument, cardNo, which is a string. It then iterates through cardNo and inserts a • character after every four digits. This is just a utility function to format the credit card number string.
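For example, feeding the method a recognized 16-digit string produces groups of four digits separated by dots:

// Hypothetical test number used purely for illustration
console.log(getFormattedCreditCardNumber('4242424242424242'));
// => "4242 • 4242 • 4242 • 4242"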

Our end output UI will look like the following:

Final UI Output Credit Card Scanner

Conclusion

In this article, we learned how to improve our mobile applications by adding a credit card scanning feature. Using the react-native-text-recognition library, we set up our application to capture a photo from the device’s camera or pick images from the phone gallery, then recognize a 16-digit credit card number.

Text recognition isn’t limited to credit card scanning. You could use it to solve many other business problems, like automating data entry for receipts, business cards, and much more! Thanks for reading. I hope you enjoyed this article, and be sure to leave a comment if you have any questions.

