
The Consumer API


We’ll pick up exactly where we left off in the previous chapter. In this chapter, we’ll focus on consuming the same Notification messages in our application. As discussed in Chapter 2, a consumer is an application that reads data from Kafka topics by subscribing to them.

Consumers maintain connectivity to the Kafka cluster using heartbeats. The heartbeat lets the group coordinator (one of the brokers) know whether the consumer is still connected to the cluster. In the absence of a heartbeat, the coordinator knows that the consumer is no longer connected and needs to rebalance the load among the remaining consumers.
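Both sides of this heartbeat contract are tunable on the consumer. As a minimal sketch (the values here are illustrative, not recommendations), the two relevant properties can be set on the same Properties object we’ll build later:

// How often the consumer sends a heartbeat to the group coordinator.
props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
// How long the coordinator waits without a heartbeat before it declares
// the consumer dead and triggers a rebalance.
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");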

As discussed in Chapter 2, consumers are also grouped into Consumer Groups so that they can share the partitions of the topics they subscribe to. We already covered the different strategies that could be adopted for various use cases in Chapter 2, so let’s try a practical implementation of them here.

Kafka Consumer Implementation

Let’s create a simple Maven-based Java application in our favorite IDE. We’ll add the following libraries as part of the initial POM:

    <dependencies>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.13</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.36</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.36</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.13.1</version>
        </dependency>
        <dependency>
            <groupId>com.github.javafaker</groupId>
            <artifactId>javafaker</artifactId>
            <version>1.0.2</version>
        </dependency>
    </dependencies>

We’re using the same set of libraries that we used while implementing the Producer API. We can reuse the same Notification object that we defined earlier, but we’ll add one extra method to convert an incoming JSON string into our Notification object.

public class Notification {

    private final String date;
    private final String orderId;
    private final String message;
    private final String userId;
    private final String name;
    private final String email;

    public Notification(String date, String orderId, String message, String userId, String name, String email) {
        this.date = date;
        this.orderId = orderId;
        this.message = message;
        this.userId = userId;
        this.name = name;
        this.email = email;
    }

    public Notification() {
        date = null;
        orderId = null;
        message = null;
        userId = null;
        name = null;
        email = null;
    }

    ...

    public static Notification fromJson(final String json) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(json, Notification.class);
    }
}

Next, we’ll look at a few of the advanced configurations that can be set on the consumer side, and try to build an enterprise-grade consumer microservice to support Steve and Jane’s e-commerce application.

Advanced Kafka Consumer Configuration

We can add a few advanced configurations to make sure we improve the throughput and efficiency of our application end-to-end. To achieve that, we’ll primarily focus on three factors:

  • Consumer Group
  • Custom Deserializer
  • Managing Offsets

Consumer Group

Consumer groups form a single logical subscriber made up of multiple consumers. Kafka consumers provide a setting, group.id, through which consumers are grouped. The consumer pool divides the work of consuming and processing the records among its members. If a consumer within a group fails, Kafka rebalances and redistributes message consumption among the remaining consumers, and it does the same once a new consumer is added. This reassignment of partitions among consumers is handled by Kafka automatically.

props.put(ConsumerConfig.GROUP_ID_CONFIG, "KafkaNotificationConsumer");

Custom Deserializer

The process of converting bytes of data into an object is called deserialization. Essentially, this process transforms the content into readable and interpretable information. Kafka ships with pre-built deserializers, mirroring its serializers, for various data types (a short usage sketch follows the list):

  • StringDeserializer
  • ShortDeserializer
  • IntegerDeserializer
  • LongDeserializer
  • DoubleDeserializer
  • BytesDeserializer
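
For simple payloads, these built-in classes can be wired in directly through the consumer properties. A minimal sketch, assuming a topic whose keys are strings and whose values are longs:

props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());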

It also provides the capability to implement a custom deserializer, in the same way we implemented custom serializers in the previous chapter, using the Deserializer interface.

Let’s implement our custom deserializer using the above interface and name it NotificationDeserializer:

public class NotificationDeserializer implements Deserializer<Notification> {

    private static final Logger logger = LoggerFactory.getLogger(NotificationDeserializer.class);

    @Override
    public Notification deserialize(String topic, byte[] data) {
        try {
            if (Objects.isNull(data)) {
                logger.error("Received null while deserializing byte[]");
                return null;
            }
            return Notification.fromJson(new String(data, StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new SerializationException("Error when deserializing byte[] to Notification");
        }
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public void close() {
    }
}

We can point Kafka at this deserializer through the consumer properties in the following way:

props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, NotificationDeserializer.class.getName());
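
Alternatively, deserializer instances can be passed straight to the KafkaConsumer constructor, in which case the two deserializer properties are not needed. A minimal sketch of that variant:

final Consumer<String, Notification> consumer =
        new KafkaConsumer<>(props, new StringDeserializer(), new NotificationDeserializer());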

Managing Offsets

An offset is the position within a partition of the next message to be sent to a consumer. Kafka maintains two types of offsets:

  • Current offset: a pointer to the last record that Kafka has already sent to the consumer during the last poll of records. Kafka maintains this position so that the consumer doesn’t receive the same record twice.
  • Committed offset: a pointer to the last record that a consumer has successfully processed. Kafka uses this to avoid resending the same records to a new consumer during a partition rebalance (both offsets can be inspected at runtime, as the sketch after this list shows).
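
A minimal sketch of reading both offsets back, assuming the consumer from our application has been assigned partition 0 of our topic (the topic name here is illustrative):

TopicPartition partition = new TopicPartition("notification", 0);
// Current offset: the position from which the next poll() will read.
long current = consumer.position(partition);
// Committed offset: may be null if the group has not committed anything yet.
OffsetAndMetadata committed = consumer.committed(Collections.singleton(partition)).get(partition);
logger.info("Current offset: {}, committed offset: {}", current, committed);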

Kafka provides the property enable.auto.commit to perform auto-commits; we can turn it off by setting it to false. Kafka also provides the property auto.commit.interval.ms, which defaults to 5 seconds, so the current offset is committed every 5 seconds. We can set this value as per our needs.
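With auto-commit disabled, the application decides when the committed offset advances. A minimal sketch of a manual-commit loop, where process() is a placeholder for our own record handling, not a Kafka API:

props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
...
while (true) {
    final ConsumerRecords<String, Notification> records = consumer.poll(Duration.ofMillis(1000));
    records.forEach(record -> process(record));
    // Blocks until the broker acknowledges the commit; commitAsync() is the non-blocking variant.
    consumer.commitSync();
}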

Once the consumer receives its assignment from the coordinator, it must determine the initial position for each assigned partition. When the group is first created, before any messages have been consumed, we can set this position to the earliest or latest offset. Kafka uses the auto.offset.reset property to pass this value.

Finally, we can combine all these properties and create a Consumer instance:

private static Consumer<String, Notification> createConsumer() {
    final Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, ApplicationConstants.BOOTSTRAP_SERVERS);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "KafkaNotificationConsumer");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    // Custom Deserializer
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, NotificationDeserializer.class.getName());
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);

    // Auto commit config
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");

    // Create the consumer using props.
    final Consumer<String, Notification> consumer = new KafkaConsumer<>(props);

    // Subscribe to the topic.
    consumer.subscribe(Collections.singletonList(ApplicationConstants.TOPIC));
    return consumer;
}

The Kafka consumer is not thread-safe. The application needs to manage its own threading, and all network I/O happens in the thread making the calls. So let’s create a method to run the consumer and display statistics along with the content of the records.

public class KafkaConsumerAPI {

    private static final Logger logger = LoggerFactory.getLogger(KafkaConsumerAPI.class);

    ...

    static void runConsumer() {
        try (Consumer<String, Notification> consumer = createConsumer()) {
            final Map<String, Notification> map = new HashMap<>();
            final int giveUp = 1000;
            int noRecordsCount = 0;
            int readCount = 0;
            while (true) {
                final ConsumerRecords<String, Notification> consumerRecords = consumer.poll(Duration.ofMillis(1000));
                if (consumerRecords.count() == 0) {
                    noRecordsCount++;
                    if (noRecordsCount > giveUp) break;
                    else continue;
                }
                readCount++;
                consumerRecords.forEach(record -> map.put(record.key(), record.value()));
                if (readCount % 100 == 0) {
                    displayRecordsStatsAndNotification(map, consumerRecords);
                }
                consumer.commitAsync();
            }
        }
        logger.info("Done Consumer Processing");
        logger.info("========================================================================================");
    }

    private static void displayRecordsStatsAndNotification(final Map<String, Notification> notificationMap,
            final ConsumerRecords<String, Notification> consumerRecords) {
        logger.info("New ConsumerRecords partition count {} count {}",
                consumerRecords.partitions().size(),
                consumerRecords.count());
        notificationMap.forEach((s, notification) ->
                logger.info("Notification content: {}", notification));
    }


    public static void main(String... args) {
        logger.info("========================================================================================");
        logger.info("Starting Kafka Consumer Process");
        runConsumer();
    }
}
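
Because the consumer is confined to a single thread, stopping it from another thread (for example, from a JVM shutdown hook) requires consumer.wakeup(), the only consumer method that is safe to call concurrently. A minimal sketch of that shutdown pattern:

Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
try {
    while (true) {
        final ConsumerRecords<String, Notification> records = consumer.poll(Duration.ofMillis(1000));
        // ... process records ...
    }
} catch (WakeupException e) {
    // Expected on shutdown: wakeup() aborts the blocking poll() with this exception.
} finally {
    consumer.close();   // leaves the group cleanly so a rebalance happens immediately
}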

This completes our basic implementation for consuming messages from Kafka. Our microservice will be able to consume these notifications efficiently and forward them to downstream applications as and when required. Now if we run the application, we can see the logs being printed to the console. A few typical logs look like the ones below:

Notification Content:

22/02/24 14:31:24 INFO stackabuse.KafkaConsumerAPI: New ConsumerRecords partition count 1 count 16
22/02/24 14:31:24 INFO stackabuse.KafkaConsumerAPI: Notification content: Notification{date=Tue Feb 22 18:29:41 IST 2022, orderId=226-62-7704, message="Your order has been processed and shipped!!", userId=835-57-6345, name="Gulian Swann", email="dedra.lueilwitz@example.com"}

22/02/24 14:31:24 INFO stackabuse.KafkaConsumerAPI: Notification content: Notification{date=Sun Feb 20 23:48:11 IST 2022, orderId=120-79-0038, message="Your order has been processed and shipped!!", userId=652-57-5121, name="Eustace", email="valene.oberbrunner@example.com"}

22/02/24 14:31:24 INFO stackabuse.KafkaConsumerAPI: Notification content: Notification{date=Sun Feb 20 06:52:56 IST 2022, orderId=631-50-9411, message="Your order has been processed and shipped!!", userId=096-01-6826, name="Tristimun", email="peter.schulist@example.com"}

22/02/24 14:31:24 INFO stackabuse.KafkaConsumerAPI: Notification content: Notification{date=Sun Feb 20 15:14:32 IST 2022, orderId=630-14-7277, message="Your order has been processed and shipped!!", userId=451-10-8489, name="Gawen Wylde", email="verlie.goyette@example.com"}

22/02/24 14:31:24 INFO stackabuse.KafkaConsumerAPI: Notification content: Notification{date=Sun Feb 20 22:33:37 IST 2022, orderId=703-84-0517, message="Your order has been processed and shipped!!", userId=584-39-0246, name="Cuger", email="bennie.harber@example.com"}

All of the code implemented as part of this chapter can be found on GitHub. We’ll take a look at the Kafka Streams API in the next chapter.
