
How to build a transcription app in React Native


When using tools like Amazon’s Alexa or Apple’s Siri, you may have wondered how they understand what you say. Essentially, your spoken words are transformed into text and fed into a system that then provides a response.

Recently, I’ve been exploring speech recognition in native mobile apps. Based on my own experience, React Native Voice is the most accessible library for building a React Native transcription app. However, if you’re unfamiliar with speech recognition or with React Native, it can be quite tricky to configure the app correctly.

In this tutorial, I’ll walk you through creating a simple transcription app in React Native using the React Native Voice library. Our React Native transcription app will allow users to record audio and then transcribe it into text. Let’s get started!

Table of contents

  • Why use React Native for your transcription app?
  • React Native Voice methods
  • Getting started
  • Building a React Native app
  • Converting speech to text
  • Start the React Native app
  • Conclusion

Why use React Native for your transcription app?

React Native is a JavaScript framework that lets you build native apps for both iOS and Android, saving you time and money by sharing the same code across both platforms.

React Native offers a variety of benefits over other frameworks, including a smaller file size, faster performance, and better support for third-party libraries. In addition, React Native is open source, meaning there is a large community of developers who contribute to the project and help improve it. Altogether, this makes React Native a great choice for building our transcription app.

React Native Voice methods

React Native Voice includes a number of useful event-triggered methods for handling speech in your app; the snippet after this list shows how to register handlers for them.

  • onSpeechStart: Triggered when the app recognizes that someone has started speaking
  • onSpeechRecognized: Triggered when the app determines that it can accurately transcribe the incoming speech data
  • onSpeechEnd: Triggered when someone stops speaking and there is a moment of silence
  • onSpeechError: Triggered when the speech recognition library throws an exception
  • onSpeechResults: Triggered when the speech recognition algorithm has finished transcribing and returned its results
  • onSpeechPartialResults: Triggered when intermediate results are available while the user is still speaking
  • onSpeechVolumeChanged: Triggered when the app detects a change in the speaker’s volume
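
For example, you can register a handler for any of these events by assigning a function to the corresponding property on the Voice module. The snippet below is a minimal sketch; the handler bodies are placeholders for your own logic:

import Voice from 'react-native-voice';

// Assign a function to each event you care about; the library calls it
// automatically when the corresponding native event fires
Voice.onSpeechStart = e => console.log('started speaking', e);
Voice.onSpeechEnd = e => console.log('stopped speaking', e);
Voice.onSpeechError = e => console.log('recognition error', e.error);
Voice.onSpeechVolumeChanged = e => console.log('volume changed', e.value);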

Getting started

To get started, you should be familiar with React Native and its syntax. You’ll need a text editor like Sublime Text or Atom installed on your computer, and finally, you’ll need to install the React Native CLI tool.

Once you have these things installed, you can begin creating your transcription app. To create a new React Native project, first open your terminal and navigate to the directory where you want your project to live. Then, run the react-native init command to create a new React Native project.

Once your project has been created, open it in your text editor. Our transcription app will require a few different components. For one, we’ll need a component that renders the transcription text. We’ll also need a component that lets the user record audio, as well as a component that converts the audio to text. Once you have these components coded, you can put them all together to create your finished transcription app.

For speech-to-text conversion, we’ll use the Voice component supplied by the React Native Voice library, which exposes numerous events that you can use to start or stop voice recognition and to obtain the results of the recognition.

When the module is initialized, it sets up its event callbacks in the constructor, as shown in the code sample below, which is the core of the React Native Voice library. As you can see, there are entries for onSpeechStart, onSpeechEnd, and the other events; the callbacks you assign to them are invoked automatically when the corresponding event occurs:

import { NativeModules, NativeEventEmitter, Platform } from 'react-native';
import invariant from 'invariant';
import {
  VoiceModule,
  SpeechEvents,
  SpeechRecognizedEvent,
  SpeechErrorEvent,
  SpeechResultsEvent,
  SpeechStartEvent,
  SpeechEndEvent,
  SpeechVolumeChangeEvent,
} from './VoiceModuleTypes';

const Voice = NativeModules.Voice as VoiceModule;

// NativeEventEmitter is only available on React Native platforms, so this conditional is used to avoid import conflicts in the browser/server
const voiceEmitter =
  Platform.OS !== 'web' ? new NativeEventEmitter(Voice) : null;
type SpeechEvent = keyof SpeechEvents;

class RCTVoice {
  _loaded: boolean;
  _listeners: any[] | null;
  _events: Required<SpeechEvents>;

  constructor() {
    this._loaded = false;
    this._listeners = null;
    this._events = {
      onSpeechStart: () => {},
      onSpeechRecognized: () => {},
      onSpeechEnd: () => {},
      onSpeechError: () => {},
      onSpeechResults: () => {},
      onSpeechPartialResults: () => {},
      onSpeechVolumeChanged: () => {},
    };
  }

  removeAllListeners() {
    Voice.onSpeechStart = undefined;
    Voice.onSpeechRecognized = undefined;
    Voice.onSpeechEnd = undefined;
    Voice.onSpeechError = undefined;
    Voice.onSpeechResults = undefined;
    Voice.onSpeechPartialResults = undefined;
    Voice.onSpeechVolumeChanged = undefined;
  }

  destroy() {
    if (!this._loaded && !this._listeners) {
      return Promise.resolve();
    }
    return new Promise((resolve, reject) => {
      Voice.destroySpeech((error: string) => {
        if (error) {
          reject(new Error(error));
        } else {
          if (this._listeners) {
            this._listeners.map(listener => listener.remove());
            this._listeners = null;
          }
          resolve();
        }
      });
    });
  }

  start(locale: any, options = {}) {
    if (!this._loaded && !this._listeners && voiceEmitter !== null) {
      this._listeners = (Object.keys(this._events) as SpeechEvent[]).map(
        (key: SpeechEvent) => voiceEmitter.addListener(key, this._events[key]),
      );
    }

    return new Promise((resolve, reject) => {
      const callback = (error: string) => {
        if (error) {
          reject(new Error(error));
        } else {
          resolve();
        }
      };
      if (Platform.OS === 'android') {
        Voice.startSpeech(
          locale,
          Object.assign(
            {
              EXTRA_LANGUAGE_MODEL: 'LANGUAGE_MODEL_FREE_FORM',
              EXTRA_MAX_RESULTS: 5,
              EXTRA_PARTIAL_RESULTS: true,
              REQUEST_PERMISSIONS_AUTO: true,
            },
            options,
          ),
          callback,
        );
      } else {
        Voice.startSpeech(locale, callback);
      }
    });
  }
  stop() {
    if (!this._loaded && !this._listeners) {
      return Promise.resolve();
    }
    return new Promise((resolve, reject) => {
      Voice.stopSpeech(error => {
        if (error) {
          reject(new Error(error));
        } else {
          resolve();
        }
      });
    });
  }
  cancel() {
    if (!this._loaded && !this._listeners) {
      return Promise.resolve();
    }
    return new Promise((resolve, reject) => {
      Voice.cancelSpeech(error => {
        if (error) {
          reject(new Error(error));
        } else {
          resolve();
        }
      });
    });
  }
  isAvailable(): Promise<0 | 1> {
    return new Promise((resolve, reject) => {
      Voice.isSpeechAvailable((isAvailable: 0 | 1, error: string) => {
        if (error) {
          reject(new Error(error));
        } else {
          resolve(isAvailable);
        }
      });
    });
  }

  /**
   * (Android) Get a list of the speech recognition engines available on the device
   * */
  getSpeechRecognitionServices() {
    if (Platform.OS !== 'android') {
      invariant(
        Voice,
        'Speech recognition services can be queried for only on Android',
      );
      return;
    }

    return Voice.getSpeechRecognitionServices();
  }

  isRecognizing(): Promise<0 | 1> {
    return new Promise(resolve => {
      Voice.isRecognizing((isRecognizing: 0 | 1) => resolve(isRecognizing));
    });
  }

  set onSpeechStart(fn: (e: SpeechStartEvent) => void) {
    this._events.onSpeechStart = fn;
  }

  set onSpeechRecognized(fn: (e: SpeechRecognizedEvent) => void) {
    this._events.onSpeechRecognized = fn;
  }
  set onSpeechEnd(fn: (e: SpeechEndEvent) => void) {
    this._events.onSpeechEnd = fn;
  }
  set onSpeechError(fn: (e: SpeechErrorEvent) => void) {
    this._events.onSpeechError = fn;
  }
  set onSpeechResults(fn: (e: SpeechResultsEvent) => void) {
    this._events.onSpeechResults = fn;
  }
  set onSpeechPartialResults(fn: (e: SpeechResultsEvent) => void) {
    this._events.onSpeechPartialResults = fn;
  }
  set onSpeechVolumeChanged(fn: (e: SpeechVolumeChangeEvent) => void) {
    this._events.onSpeechVolumeChanged = fn;
  }
}

export {
  SpeechEndEvent,
  SpeechErrorEvent,
  SpeechEvents,
  SpeechStartEvent,
  SpeechRecognizedEvent,
  SpeechResultsEvent,
  SpeechVolumeChangeEvent,
};
export default new RCTVoice();

We use the callback events above to determine the status of the speech recognition. Now, let’s examine how to start, stop, cancel, and destroy the voice recognition process.

Start the voice recognition method

When you press the Start button, the voice recognition method is invoked. It’s an asynchronous method that simply tries to start the voice recognition engine, logging an error to the console if it fails.

Now that we’re familiar with the React Native Voice library, let’s move on to the code! In this example, we’ll create a screen with a microphone icon as the clickable button. After tapping the button, we’ll start voice recognition; as it runs, we can track the status of each step through the callback functions. To halt the speech-to-text translation, we can use the Stop, Cancel, and Destroy buttons, whose handlers are sketched below.
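
A minimal sketch of those handlers is shown below. It assumes the Voice import shown earlier, the function names are just examples, and the locale passed to Voice.start is an assumption; wire them to whichever buttons you use:

// Start recognition when the microphone button is pressed
const startRecognizing = async () => {
  try {
    await Voice.start('en-US');
  } catch (e) {
    console.error(e);
  }
};

// Stop listening but keep any results already produced
const stopRecognizing = async () => {
  try {
    await Voice.stop();
  } catch (e) {
    console.error(e);
  }
};

// Cancel the current recognition attempt and discard its results
const cancelRecognizing = async () => {
  try {
    await Voice.cancel();
  } catch (e) {
    console.error(e);
  }
};

// Tear down the recognizer and detach the native event listeners
const destroyRecognizer = async () => {
  try {
    await Voice.destroy();
  } catch (e) {
    console.error(e);
  }
  Voice.removeAllListeners();
};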

We’ll get two types of results during and after speech recognition. The final result appears once the speech recognizer completes its recognition. Because the recognizer picks up some words before producing the final result, partial results appear while the final result is still being computed; since they are intermediate output, there can be many partial results for a single recognition. The snippet below shows how to handle both.
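
In code, the two kinds of results map to the onSpeechResults and onSpeechPartialResults callbacks. The sketch below assumes the same Voice import as before and simply logs the values:

// Final results: fired once the recognizer has finished an utterance;
// e.value is an array of candidate transcriptions, best match first
Voice.onSpeechResults = e => console.log('final results', e.value);

// Partial results: can fire many times while the user is still speaking,
// so treat each event as a replacement for the previous partial text
Voice.onSpeechPartialResults = e => console.log('partial results', e.value);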

Building a React Native app

To create our React Native app, we’ll use react-native init. Assuming you have Node.js installed on your machine, you can install the React Native CLI command line utility with npm.

Go to your workspace, launch the terminal, and execute the following command:

npm install -g react-native-cli

To create a new React Native project, use the following command:

react-native init ProjectName

To start a new project with a specific React Native version, use the --version parameter:

react-native init ProjectName --version X.XX.X
react-native init ProjectName --version react-native@X.XX.X

The command above will create a project structure in your project directory with an index file titled App.js.

Dependency installation

You have to install the react-native-voice dependency before you can use the Voice component. Open the terminal and navigate to your project to install it:

cd ProjectName && npm install react-native-voice --save

iOS microphone and voice recognition permission

For React Native Voice to work, you must have permission to use the microphone, and iOS requires adding keys to the Info.plist file for this. Follow the instructions below to grant authorization to use the microphone and voice recognition in the iOS project.

In Xcode, open the project workspace: TranscriptionExample -> ios -> yourprj.xcworkspace

After launching the project in Xcode, click the project in the left sidebar to see the various options in the right-hand workspace. Select the Info tab, which corresponds to Info.plist.

Next, create two permission keys, Privacy – Microphone Usage Description and Privacy – Speech Recognition Usage Description (the raw key names are NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription). You can also set the value that is shown when the permission dialog appears, as seen in the screenshot below:

[Screenshot: iOS microphone and speech recognition permission keys in Info.plist]

Converting speech to text

Now, open App.js and replace the existing code with a functional component that registers the Voice callbacks, starts and stops recognition, and renders the transcribed text.
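
The component below is a minimal sketch of such an App.js rather than a full, polished screen: it registers the Voice callbacks, exposes Start and Stop buttons, and renders the partial and final results. The component and state names are illustrative, and you should adjust the import path if you use the scoped @react-native-voice/voice package instead.

import React, {useEffect, useState} from 'react';
import {Button, SafeAreaView, Text} from 'react-native';
import Voice from 'react-native-voice';

const App = () => {
  const [isListening, setIsListening] = useState(false);
  const [partialResults, setPartialResults] = useState([]);
  const [results, setResults] = useState([]);

  useEffect(() => {
    // Register the event callbacks described earlier
    Voice.onSpeechStart = () => setIsListening(true);
    Voice.onSpeechEnd = () => setIsListening(false);
    Voice.onSpeechError = e => console.error(e.error);
    Voice.onSpeechResults = e => setResults(e.value);
    Voice.onSpeechPartialResults = e => setPartialResults(e.value);

    // Clean up the native listeners when the component unmounts
    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);

  const startRecognizing = async () => {
    try {
      await Voice.start('en-US');
    } catch (e) {
      console.error(e);
    }
  };

  const stopRecognizing = async () => {
    try {
      await Voice.stop();
    } catch (e) {
      console.error(e);
    }
  };

  return (
    <SafeAreaView>
      <Text>{isListening ? 'Listening…' : 'Press Start and speak'}</Text>
      <Button title="Start" onPress={startRecognizing} />
      <Button title="Stop" onPress={stopRecognizing} />
      <Text>Partial: {partialResults.join(' ')}</Text>
      <Text>Transcript: {results.join(' ')}</Text>
    </SafeAreaView>
  );
};

export default App;

On Android, these JavaScript calls are handled by the library’s native module, VoiceModule.java, which wraps Android’s SpeechRecognizer and emits the onSpeech* events back to JavaScript. The module is shown below so you can see where each callback comes from: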

package com.wenkesj.voice;

import android.Manifest;
import android.content.ComponentName;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.os.Bundle;
import android.os.Handler;
import android.speech.RecognitionListener;
import android.speech.RecognitionService;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.util.Log;

import androidx.annotation.NonNull;

import com.facebook.react.bridge.Arguments;
import com.facebook.react.bridge.Callback;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.facebook.react.bridge.ReadableMap;
import com.facebook.react.bridge.ReadableMapKeySetIterator;
import com.facebook.react.bridge.WritableArray;
import com.facebook.react.bridge.WritableMap;
import com.facebook.react.modules.core.DeviceEventManagerModule;
import com.facebook.react.modules.core.PermissionAwareActivity;
import com.facebook.react.modules.core.PermissionListener;

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

import javax.annotation.Nullable;

public class VoiceModule extends ReactContextBaseJavaModule implements RecognitionListener {

  final ReactApplicationContext reactContext;
  private SpeechRecognizer speech = null;
  private boolean isRecognizing = false;
  private String locale = null;

  public VoiceModule(ReactApplicationContext reactContext) {
    super(reactContext);
    this.reactContext = reactContext;
  }

  private String getLocale(String locale) {
    if (locale != null && !locale.equals("")) {
      return locale;
    }

    return Locale.getDefault().toString();
  }

  private void startListening(ReadableMap opts) {
    if (speech != null) {
      speech.destroy();
      speech = null;
    }

    if (opts.hasKey("RECOGNIZER_ENGINE")) {
      switch (opts.getString("RECOGNIZER_ENGINE")) {
        case "GOOGLE": {
          speech = SpeechRecognizer.createSpeechRecognizer(this.reactContext, ComponentName.unflattenFromString("com.google.android.googlequicksearchbox/com.google.android.voicesearch.serviceapi.GoogleRecognitionService"));
          break;
        }
        default:
          speech = SpeechRecognizer.createSpeechRecognizer(this.reactContext);
      }
    } else {
      speech = SpeechRecognizer.createSpeechRecognizer(this.reactContext);
    }

    speech.setRecognitionListener(this);

    final Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);

    // Load the intent with options passed from JS
    ReadableMapKeySetIterator iterator = opts.keySetIterator();
    while (iterator.hasNextKey()) {
      String key = iterator.nextKey();
      switch (key) {
        case "EXTRA_LANGUAGE_MODEL":
          switch (opts.getString(key)) {
            case "LANGUAGE_MODEL_FREE_FORM":
              intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
              break;
            case "LANGUAGE_MODEL_WEB_SEARCH":
              intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
              break;
            default:
              intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
              break;
          }
          break;
        case "EXTRA_MAX_RESULTS": {
          Double extras = opts.getDouble(key);
          intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, extras.intValue());
          break;
        }
        case "EXTRA_PARTIAL_RESULTS": {
          intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, opts.getBoolean(key));
          break;
        }
        case "EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS": {
          Double extras = opts.getDouble(key);
          intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS, extras.intValue());
          break;
        }
        case "EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS": {
          Double extras = opts.getDouble(key);
          intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS, extras.intValue());
          break;
        }
        case "EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS": {
          Double extras = opts.getDouble(key);
          intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, extras.intValue());
          break;
        }
      }
    }

    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, getLocale(this.locale));
    speech.startListening(intent);
  }

  private void startSpeechWithPermissions(final String locale, final ReadableMap opts, final Callback callback) {
    this.locale = locale;

    Handler mainHandler = new Handler(this.reactContext.getMainLooper());
    mainHandler.post(new Runnable() {
      @Override
      public void run() {
        try {
          startListening(opts);
          isRecognizing = true;
          callback.invoke(false);
        } catch (Exception e) {
          callback.invoke(e.getMessage());
        }
      }
    });
  }

  @Override
  public String getName() {
    return "RCTVoice";
  }

  @ReactMethod
  public void startSpeech(final String locale, final ReadableMap opts, final Callback callback) {
    if (!isPermissionGranted() && opts.getBoolean("REQUEST_PERMISSIONS_AUTO")) {
      String[] PERMISSIONS = {Manifest.permission.RECORD_AUDIO};
      if (this.getCurrentActivity() != null) {
        ((PermissionAwareActivity) this.getCurrentActivity()).requestPermissions(PERMISSIONS, 1, new PermissionListener() {
          public boolean onRequestPermissionsResult(final int requestCode,
                                                    @NonNull final String[] permissions,
                                                    @NonNull final int[] grantResults) {
            boolean permissionsGranted = true;
            for (int i = 0; i < permissions.length; i++) {
              final boolean granted = grantResults[i] == PackageManager.PERMISSION_GRANTED;
              permissionsGranted = permissionsGranted && granted;
            }
            startSpeechWithPermissions(locale, opts, callback);
            return permissionsGranted;
          }
        });
      }
      return;
    }
    startSpeechWithPermissions(locale, opts, callback);
  }

  @ReactMethod
  public void stopSpeech(final Callback callback) {
    Handler mainHandler = new Handler(this.reactContext.getMainLooper());
    mainHandler.post(new Runnable() {
      @Override
      public void run() {
        try {
          if (speech != null) {
            speech.stopListening();
          }
          isRecognizing = false;
          callback.invoke(false);
        } catch(Exception e) {
          callback.invoke(e.getMessage());
        }
      }
    });
  }

  @ReactMethod
  public void cancelSpeech(final Callback callback) {
    Handler mainHandler = new Handler(this.reactContext.getMainLooper());
    mainHandler.post(new Runnable() {
      @Override
      public void run() {
        try {
          if (speech != null) {
            speech.cancel();
          }
          isRecognizing = false;
          callback.invoke(false);
        } catch(Exception e) {
          callback.invoke(e.getMessage());
        }
      }
    });
  }

  @ReactMethod
  public void destroySpeech(final Callback callback) {
    Handler mainHandler = new Handler(this.reactContext.getMainLooper());
    mainHandler.post(new Runnable() {
      @Override
      public void run() {
        try {
          if (speech != null) {
            speech.destroy();
          }
          speech = null;
          isRecognizing = false;
          callback.invoke(false);
        } catch(Exception e) {
          callback.invoke(e.getMessage());
        }
      }
    });
  }

  @ReactMethod
  public void isSpeechAvailable(final Callback callback) {
    final VoiceModule self = this;
    Handler mainHandler = new Handler(this.reactContext.getMainLooper());
    mainHandler.post(new Runnable() {
      @Override
      public void run() {
        try {
          Boolean isSpeechAvailable = SpeechRecognizer.isRecognitionAvailable(self.reactContext);
          callback.invoke(isSpeechAvailable, false);
        } catch(Exception e) {
          callback.invoke(false, e.getMessage());
        }
      }
    });
  }

  @ReactMethod
  public void getSpeechRecognitionServices(Promise promise) {
    final List<ResolveInfo> services = this.reactContext.getPackageManager()
        .queryIntentServices(new Intent(RecognitionService.SERVICE_INTERFACE), 0);
    WritableArray serviceNames = Arguments.createArray();
    for (ResolveInfo service : services) {
      serviceNames.pushString(service.serviceInfo.packageName);
    }

    promise.resolve(serviceNames);
  }

  private boolean isPermissionGranted() {
    String permission = Manifest.permission.RECORD_AUDIO;
    int res = getReactApplicationContext().checkCallingOrSelfPermission(permission);
    return res == PackageManager.PERMISSION_GRANTED;
  }

  @ReactMethod
  public void isRecognizing(Callback callback) {
    callback.invoke(isRecognizing);
  }

  private void sendEvent(String eventName, @Nullable WritableMap params) {
    this.reactContext
      .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter.class)
      .emit(eventName, params);
  }

  @Override
  public void onBeginningOfSpeech() {
    WritableMap event = Arguments.createMap();
    event.putBoolean("error", false);
    sendEvent("onSpeechStart", event);
    Log.d("ASR", "onBeginningOfSpeech()");
  }

  @Override
  public void onBufferReceived(byte[] buffer) {
    WritableMap event = Arguments.createMap();
    event.putBoolean("error", false);
    sendEvent("onSpeechRecognized", event);
    Log.d("ASR", "onBufferReceived()");
  }

  @Override
  public void onEndOfSpeech() {
    WritableMap event = Arguments.createMap();
    event.putBoolean("error", false);
    sendEvent("onSpeechEnd", event);
    Log.d("ASR", "onEndOfSpeech()");
    isRecognizing = false;
  }

  @Override
  public void onError(int errorCode) {
    String errorMessage = String.format("%d/%s", errorCode, getErrorText(errorCode));
    WritableMap error = Arguments.createMap();
    error.putString("message", errorMessage);
    error.putString("code", String.valueOf(errorCode));
    WritableMap event = Arguments.createMap();
    event.putMap("error", error);
    sendEvent("onSpeechError", event);
    Log.d("ASR", "onError() - " + errorMessage);
  }

  @Override
  public void onEvent(int arg0, Bundle arg1) { }

  @Override
  public void onPartialResults(Bundle results) {
    WritableArray arr = Arguments.createArray();

    ArrayList<String> matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    for (String result : matches) {
      arr.pushString(result);
    }

    WritableMap event = Arguments.createMap();
    event.putArray("value", arr);
    sendEvent("onSpeechPartialResults", event);
    Log.d("ASR", "onPartialResults()");
  }

  @Override
  public void onReadyForSpeech(Bundle arg0) {
    WritableMap event = Arguments.createMap();
    event.putBoolean("error", false);
    sendEvent("onSpeechStart", event);
    Log.d("ASR", "onReadyForSpeech()");
  }

  @Override
  public void onResults(Bundle results) {
    WritableArray arr = Arguments.createArray();

    ArrayList<String> matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    for (String result : matches) {
      arr.pushString(result);
    }

    WritableMap event = Arguments.createMap();
    event.putArray("value", arr);
    sendEvent("onSpeechResults", event);
    Log.d("ASR", "onResults()");
  }

  @Override
  public void onRmsChanged(float rmsdB) {
    WritableMap event = Arguments.createMap();
    event.putDouble("value", (double) rmsdB);
    sendEvent("onSpeechVolumeChanged", event);
  }

  public static String getErrorText(int errorCode) {
    String message;
    switch (errorCode) {
      case SpeechRecognizer.ERROR_AUDIO:
        message = "Audio recording error";
        break;
      case SpeechRecognizer.ERROR_CLIENT:
        message = "Shopper facet error";
        break;
      case SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS:
        message = "Inadequate permissions";
        break;
      case SpeechRecognizer.ERROR_NETWORK:
        message = "Community error";
        break;
      case SpeechRecognizer.ERROR_NETWORK_TIMEOUT:
        message = "Community timeout";
        break;
      case SpeechRecognizer.ERROR_NO_MATCH:
        message = "No match";
        break;
      case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:
        message = "RecognitionService busy";
        break;
      case SpeechRecognizer.ERROR_SERVER:
        message = "error from server";
        break;
      case SpeechRecognizer.ERROR_SPEECH_TIMEOUT:
        message = "No speech enter";
        break;
      default:
        message = "Did not perceive, please attempt once more.";
        break;
    }
    return message;
  }
}

Start the React Native app

Reopen the terminal and use the command below to go to your project:

cd ProjectName

To run the project on an Android virtual device or a real debugging device, use the following command:


react-native run-android

For the iOS Simulator on macOS only, use the command below:

react-native run-ios

Conclusion

You’ll find the complete code for this project at this GitHub repository. I extracted and modified the sections that were used for the transcription and text-to-speech capabilities.

It’s incredible to see how far voice recognition has progressed and how simple it is to integrate it into our applications with little to no theoretical transcription experience. I’d strongly recommend adopting this functionality if you want to use voice recognition in your application but lack the skills or time to build a custom model.

You can also build on the knowledge provided in this tutorial to add extra features to your transcription app. For example, you could allow users to search through their transcribed texts for specific keywords or phrases. You could include a sharing feature so that users can share their transcriptions with others, or, finally, you could provide a way for users to export their transcriptions into other formats, like PDFs or Word documents.

I hope this article was helpful. Please be sure to leave a comment if you have any questions or issues. Happy coding!
