
Guided Project: Data Pipeline in Spring Boot using Kafka


The notion of a data pipeline isn't new, and many companies have used them for years (sometimes without realizing it), although in different configurations than the ones we see today. Data pipelines have become essential for businesses, though, as the growth of company data continues to be exponential year after year.

Data pipelines are essential in the area of data analytics and business analysis, but they also offer many benefits and applications outside of business intelligence. Today, we'll talk about the advantages of data pipelines, what a data pipeline consists of, and give a high-level technical breakdown of its essential components.

A data pipeline, to put it simply, is a set of actions that move unprocessed, structured or unstructured data from a source to a destination. There can be various kinds of source; in this context a source might be a transactional database, while the typical destination would be a data lake or a data warehouse. The final location is where the data is examined for commercial insights. Along this whole journey from source to destination, various kinds of transformation logic are applied to the extracted data at various stages to make it ready for quick analysis and presentable or visualizable in the user interface.

In this guided project, we will build a data pipeline using Spring Boot with Apache Kafka at its core. We will emit messages into Kafka and then consume them to store in different kinds of datastores. First, we will try the most commonly used kind of SQL database as a destination. Then we will also process the data into a NoSQL database like Apache Cassandra. Finally, we will perform data transformation from simple datatypes to the Avro format and vice versa using Confluent Schema Registry.

So, let's put on the hat of a plumber and design the pipeline that will lead us to our data lake, streaming events of data along the way.

What is a Data Pipeline?

A data pipeline usually has three primary components: a data source, processing stages, and a destination or sink at the end. Users can move data between sources and destinations while also making changes to it via pipelines. Pipelines may even share a source and destination, with the data simply being transformed to meet the needs.

However, in recent years, the variety, volume, and velocity of data have changed considerably and increased in complexity. Pipelines must therefore be capable of handling the Big Data demands of the majority of enterprises. Businesses must take great care to ensure that their data pipelines do not lose any information and remain highly accurate, since a large volume of data presents opportunities for operations like real-time reporting and predictive analytics.

These pipelines are designed to handle the most important characteristics of big data: velocity, volume, and variety. They should be able to manage streaming data, given the speed at which the data is created at the source and extracted. They should also process this data in real time.

Since the volume of the generated data may change over time, these pipelines need to be scalable enough. All types of data, whether structured, semi-structured, or unstructured, should be supported by these pipelines as well.

Usually there are four types of data pipelines that can be designed:

  • Batch: Companies use batch pipelines when they need to transport large amounts of data. Most of the time, batch processing jobs are executed on a predetermined schedule or, in certain situations, when the amount of data hits a predetermined threshold.
  • Real-time: This pipeline is designed to consume and process the data as soon as it is produced or generated at the source, i.e. in real time. If we can process data from any streaming source, such as financial market information or events from sensors or IoT devices, in real time, it adds a lot of value to the business.
  • Open-Source: These pipelines are considered appropriate for companies looking for a less expensive alternative to commercial pipelines, or that want to design a pipeline specifically to meet their particular business and data requirements in silos.
  • Software-as-a-Service (SaaS): Only cloud-based data sources, destinations, or both are compatible with these pipelines. Because these pipelines are natively hosted in cloud platforms like AWS, Azure or GCP, organizations may spend less on infrastructure and specialized staff.

Components of a Data Pipeline

Let's examine what a pipeline usually consists of in order to understand how a data pipeline functions in general:

  • Source: The source is the point at which all data enters the pipeline. Most pipelines originate from storage systems like data warehouses, data lakes, etc., or from transactional processing applications, application APIs, IoT device sensors, and so on.
  • Destination: The final location to which data is sent is the ultimate destination. For the majority of use cases, the destination is a data warehouse, data lake, or a data analysis and business intelligence tool.
  • Dataflow: This involves the movement of data between the sources and destinations, as well as any modifications that are made to it along the way. ETL (Extract, Transform, Load) is one of the most widely used approaches to dataflow.
  • Storage: As data moves through the pipeline, all technologies used to preserve it at various stages are referred to as storage.
  • Processing: The process of ingesting data from sources, storing it, transforming it, and feeding it into the destination is known as processing. Although data processing is related to the dataflow, this step's emphasis is on the dataflow's execution.
  • Workflow: A workflow defines a series of processes, together with how they relate to one another in a pipeline.
  • Monitoring: Monitoring is done to make sure the pipeline and each of its stages are functioning correctly and carrying out the required tasks.

Dataflow using the ETL Pipeline

Extract, Transform, and Load describes the set of processes which extracts data from one system, transforms it, and then loads it into a target system.

  • Extract: The process of acquiring all necessary data from the source systems is known as extraction. For the majority of ETL processes, these sources can be databases like MySQL, PostgreSQL, MongoDB, Cassandra, etc., or tools for Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), and so on.
  • Transform: The process of reshaping the data so that it can be easily understood by a business intelligence or data analysis tool is called transformation. Typically, the following actions are performed during this stage:
    • Cleaning, verifying, and validating the data after filtering and de-duplicating it.
    • Applying any required translations, computations, or summarizations to the retrieved raw data.
    • Encrypting, deleting, or obscuring data that is subject to legal or regulatory requirements.
    • Reshaping the data into tables and performing the required joins to conform to the schema of the target data warehouse.
  • Load: The process of storing the converted data in the desired location, usually a data warehouse or a smaller datastore, is called loading.

Note: The ETL process works well for small data sets that need intricate transformations. The ELT (Extract, Load and Transform) process is more suitable for larger, unstructured data collections and situations where speed is essential. A tiny illustrative sketch of the ETL steps follows.
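
To make the three steps concrete, here is a minimal, self-contained Java sketch (purely illustrative, not part of the project we build later): it extracts rows from an in-memory "source", transforms them by validating, converting and de-duplicating, and loads them into an in-memory "destination".

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MiniEtl {

    public static void main(String[] args) {
        // Extract: pull raw rows from the "source" (here, an in-memory list of CSV-like strings).
        List<String> extracted = List.of(
                "1, Acme Corp ,42.50",
                "2,Globex,not-a-number",      // invalid price, should be filtered out
                "1, Acme Corp ,42.50");       // duplicate id, should be de-duplicated

        // Transform: split, trim, validate, convert types and de-duplicate by id.
        Map<String, Double> transformed = new LinkedHashMap<>();
        for (String row : extracted) {
            String[] parts = row.split(",");
            String id = parts[0].trim();
            String price = parts[2].trim();
            if (price.matches("\\d+(\\.\\d+)?")) {            // keep only rows with a numeric price
                transformed.putIfAbsent(id, Double.parseDouble(price));
            }
        }

        // Load: write the cleaned rows into the "destination" (here, a target list we just print).
        List<String> destination = new ArrayList<>();
        transformed.forEach((id, price) -> destination.add(id + " -> " + price));
        destination.forEach(System.out::println);
    }
}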

Now that we have a good idea of what a data pipeline is, let's try to build some common pipelines and ingest data into various kinds of datastores.

Extract from Kafka and Auto-Load into a PostgreSQL Database

In our earlier chapters, we have dived deep into the concepts and usage of Apache Kafka. In this guided project, we will use those same Kafka concepts to build data pipelines. So, in our first example, we will try to ingest data from Kafka and store it in a PostgreSQL database.

We will extract the messages that various sources push into a Kafka topic and define consumer methods that auto-ingest them into PostgreSQL the moment they are published to the topic. Basically, we will try out the concept of an Extract and Load pipeline in this example.

Now let's start with our implementation. First of all, let's initialize the Spring Boot application using Spring Initializr:

Spring Initializr Reactive Kafka

We have added the Spring Data JPA and PostgreSQL Driver dependencies for the Spring Data connection. Then we have also added Spring Web and Spring for Apache Kafka to pull in the Kafka libraries and modules, plus Lombok.

For this example, we will play with some stock market data and try to build a financial data analytics system. Imagine that we get a continuous stream of data from a stock exchange that needs to be analyzed and shown on a website so that users can make a decision and buy or sell their shares. So, let's first configure our system to be capable of handling the connections with Kafka.

We will first define some simple properties for Kafka in application.yml:

spring:
  kafka:
    producer:
      bootstrap-servers: localhost:9092
    consumer:
      bootstrap-servers: localhost:9092
      group: group_stock

This provides the bootstrap server and consumer group information for Kafka. Now, let's define the KafkaProducerConfiguration class that holds the Kafka producer config:

@Configuration
public class KafkaProducerConfiguration {

    @Value(value = "${spring.kafka.producer.bootstrap-servers}")
    private String bootstrapServer;

    @Bean
    public ProducerFactory<String, Stock> stockProducerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);

        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public ProducerFactory<String, StockProfile> stockProfileProducerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);

        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, Stock> stockKafkaTemplate() {
        return new KafkaTemplate<>(stockProducerFactory());
    }

    @Bean
    public KafkaTemplate<String, StockProfile> stockProfileKafkaTemplate() {
        return new KafkaTemplate<>(stockProfileProducerFactory());
    }
}

Since we will process two kinds of records, one holding the current stock details and the other holding the stock profile of the company whose shares are registered, we instantiate two ProducerFactory instances, one for each of our entities.
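
If maintaining two almost identical factories feels repetitive, a single generic factory and template could serve both payload types. This is only an alternative sketch; the rest of the example keeps the two dedicated beans:

// Alternative sketch: one generic ProducerFactory/KafkaTemplate for any JSON payload.
@Bean
public ProducerFactory<String, Object> genericProducerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(config);
}

@Bean
public KafkaTemplate<String, Object> genericKafkaTemplate() {
    return new KafkaTemplate<>(genericProducerFactory());
}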

Next, we will define the KafkaConsumerConfiguration class that holds the Kafka consumer config:

@EnableKafka
@Configuration
public class KafkaConsumerConfiguration {

    @Value(value = "${spring.kafka.consumer.bootstrap-servers}")
    private String bootstrapServer;

    @Value(value = "${spring.kafka.consumer.group}")
    private String group;

    @Bean
    @ConditionalOnMissingBean(ConsumerFactory.class)
    public ConsumerFactory<String, String> stockConsumerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, group);
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(),
                new JsonDeserializer<>());
    }

    @Bean
    @ConditionalOnMissingBean(ConsumerFactory.class)
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(stockConsumerFactory());
        return factory;
    }
}
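
Note that the listeners we write below receive the raw JSON as a String and map it by hand, so a plain StringDeserializer for the value would work just as well. If you would rather have Spring hand you typed objects directly, a typed JsonDeserializer is an option; the following is only a sketch of that variant and is not used in the rest of this guide:

// Sketch of a typed consumer factory that deserializes straight into Stock objects.
@Bean
public ConsumerFactory<String, Stock> typedStockConsumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    config.put(ConsumerConfig.GROUP_ID_CONFIG, group);

    JsonDeserializer<Stock> valueDeserializer = new JsonDeserializer<>(Stock.class);
    valueDeserializer.addTrustedPackages("*");   // trust the type headers added by JsonSerializer

    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), valueDeserializer);
}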

Once we're done with our configurations, let's create our entities. We will create two entities, which will also define our table schema.

Stock:

@AllArgsConstructor
@NoArgsConstructor
@ToString
@Builder
@Getter
@Setter
@Entity
@Table(name = "stock")
public class Stock {

    @Id
    private String id;
    private String date;
    private String company;
    private Float open;
    private Float close;
    private Float high;
    private Float low;
    private Integer volume;
}

StockProfile:

@AllArgsConstructor
@NoArgsConstructor
@ToString
@Builder
@Getter
@Setter
@Entity
@Table(name = "stock_profile")
public class StockProfile {

    @Id
    private String id;
    private String company;
    private String profession;
    private String sector;
    private String address;
    private String registration;
}

Next, we will create the repositories by extending the JpaRepository interface. This interface carries all the base implementation needed to handle CRUD operations for most of the commonly used SQL databases; a small derived-query illustration follows the declarations below.

public interface StockPostgresRepository extends JpaRepository<Stock, String> {
}
public interface StockProfilePostgresRepository extends JpaRepository<StockProfile, String> {
}
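
Beyond the inherited CRUD methods, Spring Data JPA can also derive queries from method names. As an illustration only, a hypothetical finder by company (not used elsewhere in this guide) could be added like this:

public interface StockPostgresRepository extends JpaRepository<Stock, String> {

    // Hypothetical derived query: Spring Data builds
    // "select s from Stock s where s.company = ?1" from the method name alone.
    List<Stock> findByCompany(String company);
}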

This concludes the configuration and setup for our application. Now we can start the service-level implementation. We will define a faker service that generates random stock and stock profile records.

@Service
public class StocksFakerService {

    public List<Stock> getRandomStocks() throws ParseException {

        Faker faker = new Faker();
        List<Stock> stockCollection = new ArrayList<>();

        final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        sdf.setTimeZone(TimeZone.getTimeZone("UTC"));   // Interpret the given dates in the UTC time zone
        final Date dateObjFrom = sdf.parse("2001-01-01T01:37:56");
        final Date dateObjTo = sdf.parse("2020-03-31T01:37:56");

        for (int i = 0; i < 500; i++) {
            Stock stock = Stock.builder()
                    .id(UUID.randomUUID().toString())
                    .date(faker.date().between(dateObjFrom, dateObjTo).toString())
                    .company(faker.company().name())
                    .open(Float.parseFloat(faker.commerce().price(20, 1000)))
                    .close(Float.parseFloat(faker.commerce().price(500, 1000)))
                    .high(Float.parseFloat(faker.commerce().price(800, 1000)))
                    .low(Float.parseFloat(faker.commerce().price(5, 200)))
                    .volume(faker.random().nextInt(100, 1000000))
                    .build();
            stockCollection.add(stock);
        }

        return stockCollection;
    }

    public List<StockProfile> getRandomStockProfiles() throws ParseException {

        Faker faker = new Faker();
        List<StockProfile> stockProfileCollection = new ArrayList<>();

        final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        sdf.setTimeZone(TimeZone.getTimeZone("UTC"));   // Interpret the given dates in the UTC time zone
        final Date dateObjFrom = sdf.parse("2000-01-01T01:37:56");
        final Date dateObjTo = sdf.parse("2000-12-31T01:37:56");

        for (int i = 0; i < 100; i++) {
            StockProfile stockProfile = StockProfile.builder()
                    .id(UUID.randomUUID().toString())
                    .company(faker.company().name())
                    .profession(faker.company().profession())
                    .sector(faker.company().industry())
                    .address(faker.address().fullAddress())
                    .registration(faker.date().between(dateObjFrom, dateObjTo).toString())
                    .build();
            stockProfileCollection.add(stockProfile);
        }

        return stockProfileCollection;
    }
}

Now we will define a producer service that publishes this randomly generated data to our Kafka topics:

@Slf4j
@Service
public class ProducerService {

    private static final String STOCKS_TOPIC = "stocks";
    private static final String STOCKS_PROFILES_TOPIC = "stocks_profiles";

    @Autowired
    StocksFakerService stocksFakerService;

    @Autowired
    private KafkaTemplate<String, Stock> kafkaStockTemplate;

    @Autowired
    private KafkaTemplate<String, StockProfile> kafkaStockProfileTemplate;

    @PostConstruct
    @Transactional
    public void sendMessage() throws ParseException {
        List<Stock> stocks = stocksFakerService.getRandomStocks();
        List<StockProfile> stocksProfiles = stocksFakerService.getRandomStockProfiles();
        for (Stock stock: stocks) {
            log.info("$$ -> Producing stocks message --> {}", stock);
            kafkaStockTemplate.send(STOCKS_TOPIC, stock);
        }
        for (StockProfile stockProfile: stocksProfiles) {
            log.info("$$ -> Producing stocks profile message --> {}", stockProfile);
            kafkaStockProfileTemplate.send(STOCKS_PROFILES_TOPIC, stockProfile);
        }
    }
}
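
Since KafkaTemplate.send() returns a listenable future, we can also ask for confirmation per record. The helper below is a hypothetical addition to the same ProducerService, shown only as a sketch:

// Hypothetical helper: publish a single Stock and log the broker acknowledgement.
private void sendWithConfirmation(Stock stock) {
    kafkaStockTemplate.send(STOCKS_TOPIC, stock).addCallback(
            result -> log.info("Delivered {} to partition {} at offset {}",
                    stock,
                    result.getRecordMetadata().partition(),
                    result.getRecordMetadata().offset()),
            failure -> log.error("Failed to deliver {}", stock, failure));
}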

This publishes our records to Kafka and makes them ready for consumption. Next, we will consume this data as soon as it is published and store it in the PostgreSQL database:

@Slf4j
@Service
public class ConsumerService {

    @Autowired
    StockPostgresRepository stockPostgresRepo;

    @Autowired
    StockProfilePostgresRepository stockProfilePostgresRepo;

    @KafkaListener(topics = "stocks",
            groupId = "${spring.kafka.consumer.group}",
            autoStartup = "true",
            containerFactory = "kafkaListenerContainerFactory")
    public void consumeStockJson(String message) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode json = mapper.readTree(message);
        Stock stock = new Stock();
        stock.setId(json.findValue("id").textValue());
        stock.setDate(json.findValue("date").textValue());
        stock.setCompany(json.findValue("company").textValue());
        stock.setOpen(json.findValue("open").floatValue());
        stock.setClose(json.findValue("close").floatValue());
        stock.setHigh(json.findValue("high").floatValue());
        stock.setLow(json.findValue("low").floatValue());
        stock.setVolume(json.findValue("volume").intValue());
        stockPostgresRepo.save(stock);
        log.info("Consumed Stock message: " + stock);
    }

    @KafkaListener(topics = "stocks_profiles",
            groupId = "${spring.kafka.consumer.group}",
            autoStartup = "true",
            containerFactory = "kafkaListenerContainerFactory")
    public void consumeStockProfileJson(String message) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode json = mapper.readTree(message);
        StockProfile stockProfile = new StockProfile();
        stockProfile.setId(json.findValue("id").textValue());
        stockProfile.setCompany(json.findValue("company").textValue());
        stockProfile.setProfession(json.findValue("profession").textValue());
        stockProfile.setSector(json.findValue("sector").textValue());
        stockProfile.setAddress(json.findValue("address").textValue());
        stockProfile.setRegistration(json.findValue("registration").textValue());
        stockProfilePostgresRepo.save(stockProfile);
        log.info("Consumed Stock Profile message: " + stockProfile);
    }
}
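
Mapping every field by hand works, but Jackson can also bind the whole payload onto the entity in one call. The variant below is only a sketch of how the stocks listener could look if it replaced the manual version above (it assumes the JSON keys match the Stock field names, which they do here):

@KafkaListener(topics = "stocks",
        groupId = "${spring.kafka.consumer.group}",
        autoStartup = "true",
        containerFactory = "kafkaListenerContainerFactory")
public void consumeStockJson(String message) throws JsonProcessingException {
    ObjectMapper mapper = new ObjectMapper();
    // Let Jackson do the field-by-field mapping instead of calling the setters ourselves.
    Stock stock = mapper.readValue(message, Stock.class);
    stockPostgresRepo.save(stock);
    log.info("Consumed Stock message: {}", stock);
}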

For testing purposes, we can quickly host a PostgreSQL instance using Docker:

docker run -itd -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=password -p 5432:5432 -v /data:/var/lib/postgresql/data --name postgresql postgres

Finally, we will update our application.yml with the JPA configuration to connect to the PostgreSQL DB:

spring:
  jpa:
    show-sql: true
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:postgresql://localhost:5432/postgres
    username: postgres
    password: password

This marks our simple "Extract and Load" application as complete. Now we can build and run the application. It will publish some random records on startup, then automatically ingest that data from Kafka and push it to the PostgreSQL database for further processing. All of the code for this section can be found on GitHub.

Extract from Kafka but Persist in Multiple SQL Datastores Simultaneously

Typically, we store data in a single relational database. But when we are dealing with big data infrastructures, we often see multiple datastores co-existing in the system, so we should be aware of how a single application can be configured to ingest the same data into several datastores. In this example, we will extend the previous use case to handle multiple SQL-based datastores. We will add MySQL alongside our existing PostgreSQL instance.

So, let's quickly start with the implementation. We will need to add the MySQL driver to our existing pom.xml:

<dependency>
    <groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<scope>runtime</scope>
</dependency>

Now, recall that our earlier datasource configuration was defined in application.yml like this:

spring:
  jpa:
    show-sql: ...
    hibernate:
      ddl-auto: ...
  datasource:
    url: ...
    username: ...
    password: ...

Internally, Spring maps these settings to an instance of org.springframework.boot.autoconfigure.jdbc.DataSourceProperties. We can't use that here, as we need to declare multiple beans with different mappings inside Spring's application context. So we need to define a separate properties file that will hold all of the datasource properties:

mysql.jdbc.dialect=org.hibernate.dialect.MySQL8Dialect
mysql.jdbc.driverClassName=com.mysql.cj.jdbc.Driver
mysql.jdbc.url=jdbc:mysql://localhost:3306/test
mysql.jdbc.user=root
mysql.jdbc.pass=password

postgres.jdbc.dialect=org.hibernate.dialect.PostgreSQLDialect
postgres.jdbc.driverClassName=org.postgresql.Driver
postgres.jdbc.url=jdbc:postgresql://localhost:5432/postgres
postgres.jdbc.user=postgres
postgres.jdbc.pass=password

hibernate.generate-ddl=true
hibernate.show_sql=true
hibernate.hbm2ddl.auto=update
hibernate.cache.use_second_level_cache=false
hibernate.cache.use_query_cache=false

Now we need to load these properties and instantiate multiple beans for our transaction managers. First we will define our PostgreSQLConfig:

@Configuration
@PropertySource({ "classpath:persistence-multiple-db.properties" })
@EnableJpaRepositories(
        basePackages = "com.stackabuse.kafka_data_pipeline.postgresql.repository",
        entityManagerFactoryRef = "postgresEntityManagerFactory",
        transactionManagerRef = "postgresTransactionManager"
)
public class PostgreSQLConfig {

    @Autowired
    private Environment environment;

    @Primary
    @Bean
    public LocalContainerEntityManagerFactoryBean postgresEntityManagerFactory() {

        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(postgresDataSource());
        em.setPackagesToScan("com.stackabuse.kafka_data_pipeline.model");
        em.setPersistenceUnitName("model");

        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        em.setJpaVendorAdapter(vendorAdapter);
        em.afterPropertiesSet();

        HashMap<String, Object> properties = new HashMap<>();
        properties.put("hibernate.hbm2ddl.auto", environment.getProperty("hibernate.hbm2ddl.auto"));
        properties.put("hibernate.generate-ddl", environment.getProperty("hibernate.generate-ddl"));
        properties.put("hibernate.show_sql", environment.getProperty("hibernate.show_sql"));
        properties.put("hibernate.dialect", environment.getProperty("postgres.jdbc.dialect"));
        em.setJpaPropertyMap(properties);

        return em;
    }

    @Primary
    @Bean
    public DataSource postgresDataSource() {

        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(
                Objects.requireNonNull(environment.getProperty("postgres.jdbc.driverClassName"))
        );
        dataSource.setUrl(environment.getProperty("postgres.jdbc.url"));
        dataSource.setUsername(environment.getProperty("postgres.jdbc.user"));
        dataSource.setPassword(environment.getProperty("postgres.jdbc.pass"));

        return dataSource;
    }

    @Primary
    @Bean
    public PlatformTransactionManager postgresTransactionManager(EntityManagerFactory entityManagerFactory) {

        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(postgresEntityManagerFactory().getObject());
        return transactionManager;
    }
}

This loads the properties from the previously defined persistence-multiple-db.properties file and scans the entity model package containing our previously defined Stock and StockProfile entities. As you may notice, since the PostgreSQL database is the primary datastore where the data lands first, we have marked all of the beans in this config as @Primary.

Similarly, we will instantiate the MySQL beans and configure its transaction manager:

@Configuration
@PropertySource({ "classpath:persistence-multiple-db.properties" })
@EnableJpaRepositories(
        basePackages = "com.stackabuse.kafka_data_pipeline.mysql.repository",
        entityManagerFactoryRef = "mysqlEntityManagerFactory",
        transactionManagerRef = "mysqlTransactionManager"
)
public class MySQLConfig {

    @Autowired
    private Environment environment;

    @Bean
    public LocalContainerEntityManagerFactoryBean mysqlEntityManagerFactory() {

        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(mysqlDataSource());
        em.setPackagesToScan("com.stackabuse.kafka_data_pipeline.model");
        em.setPersistenceUnitName("model");

        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        em.setJpaVendorAdapter(vendorAdapter);
        em.afterPropertiesSet();

        HashMap<String, Object> properties = new HashMap<>();
        properties.put("hibernate.hbm2ddl.auto", environment.getProperty("hibernate.hbm2ddl.auto"));
        properties.put("hibernate.generate-ddl", environment.getProperty("hibernate.generate-ddl"));
        properties.put("hibernate.show_sql", environment.getProperty("hibernate.show_sql"));
        properties.put("hibernate.dialect", environment.getProperty("mysql.jdbc.dialect"));
        em.setJpaPropertyMap(properties);

        return em;
    }

    @Bean
    public DataSource mysqlDataSource() {

        final DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(
                Objects.requireNonNull(environment.getProperty("mysql.jdbc.driverClassName"))
        );
        dataSource.setUrl(environment.getProperty("mysql.jdbc.url"));
        dataSource.setUsername(environment.getProperty("mysql.jdbc.user"));
        dataSource.setPassword(environment.getProperty("mysql.jdbc.pass"));

        return dataSource;
    }

    @Bean
    public PlatformTransactionManager mysqlTransactionManager(EntityManagerFactory entityManagerFactory) {

        final JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(mysqlEntityManagerFactory().getObject());
        return transactionManager;
    }
}

Next, we need to define separate repositories for the MySQL instance:

public interface StockMysqlRepository extends JpaRepository<Stock, String> {
}
public interface StockProfileMysqlRepository extends JpaRepository<StockProfile, String> {
}

Then we need to update our consumer service to include the MySQL repositories, so that the records can be stored simultaneously in MySQL and PostgreSQL as soon as data is published to the Kafka topics.

@Slf4j
@Service
public class ConsumerService {

    @Autowired
    StockPostgresRepository stockPostgresRepo;

    @Autowired
    StockMysqlRepository stockMysqlRepo;

    @Autowired
    StockProfilePostgresRepository stockProfilePostgresRepo;

    @Autowired
    StockProfileMysqlRepository stockProfileMysqlRepo;

    @KafkaListener(topics = "stocks",
            groupId = "${spring.kafka.consumer.group}",
            autoStartup = "true",
            containerFactory = "kafkaListenerContainerFactory")
    public void consumeStockJson(String message) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode json = mapper.readTree(message);
        Stock stock = new Stock();
        stock.setId(json.findValue("id").textValue());
        stock.setDate(json.findValue("date").textValue());
        stock.setCompany(json.findValue("company").textValue());
        stock.setOpen(json.findValue("open").floatValue());
        stock.setClose(json.findValue("close").floatValue());
        stock.setHigh(json.findValue("high").floatValue());
        stock.setLow(json.findValue("low").floatValue());
        stock.setVolume(json.findValue("volume").intValue());
        stockPostgresRepo.save(stock);
        stockMysqlRepo.save(stock);
        log.info("Consumed Stock message: " + stock);
    }

    @KafkaListener(topics = "stocks_profiles",
            groupId = "${spring.kafka.consumer.group}",
            autoStartup = "true",
            containerFactory = "kafkaListenerContainerFactory")
    public void consumeStockProfileJson(String message) throws JsonProcessingException {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode json = mapper.readTree(message);
        StockProfile stockProfile = new StockProfile();
        stockProfile.setId(json.findValue("id").textValue());
        stockProfile.setCompany(json.findValue("company").textValue());
        stockProfile.setProfession(json.findValue("profession").textValue());
        stockProfile.setSector(json.findValue("sector").textValue());
        stockProfile.setAddress(json.findValue("address").textValue());
        stockProfile.setRegistration(json.findValue("registration").textValue());
        stockProfilePostgresRepo.save(stockProfile);
        stockProfileMysqlRepo.save(stockProfile);
        log.info("Consumed Stock Profile message: " + stockProfile);
    }
}

Finally, we need to add a few annotations to our main class so that it can discover and register all of the required beans:

@SpringBootApplication
@ComponentScan
@ConfigurationPropertiesScan("com.stackabuse.kafka_data_pipeline.config")
public class KafkaDataPipelineApplication {

	public static void main(String[] args) {
		SpringApplication.run(KafkaDataPipelineApplication.class, args);
	}

}

Now we can simply host a MySQL Docker instance for our testing:

docker run -itd --name mysql \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=password \
    -v mysql:/var/lib/mysql \
    mysql

This completes our multi-datastore configuration. Now we can build and run the application. It will publish some random records on startup, then automatically ingest the data from Kafka and push it simultaneously to both the PostgreSQL and MySQL databases for further processing.

You can find all of the above code on GitHub.

Extract from Kafka and Transform To and From Avro using Confluent Schema Registry

In the above examples, we explored the "Extract and Load" flavour of data pipeline, where we simply consumed data from Kafka and ingested it into our SQL databases. But in real-world production, this won't always be the case. We might have to process, transform or cleanse that data before storing it back in our data lake or warehouse. Thus, there is a high chance of needing a seamless transformation mechanism that can be integrated easily with Kafka.

Two things happen once applications are actively sending messages to Apache Kafka and receiving messages from it. First, new consumers of existing topics start to appear. These are brand-new applications that need to understand the format of the messages in the topic; they might have been created by the same team that wrote the original producer of the messages, or they might have been written by some other team. Second, as the organization develops, so do the structure and complexity of those messages. New fields are added to existing objects, a single string is split into several strings, and so on. Our object schema is a moving target, hence we need a mechanism to agree on the message schema for every particular topic. This is where Confluent Schema Registry comes to the rescue.

What is Schema Registry?

According to Confluent.io: The Schema Registry is a server that stores a versioned history of all schemas and allows for the evolution of schemas according to the configured compatibility settings, with expanded Avro support.

It operates independently from the Kafka brokers, on a separate system. Its sole responsibility is to maintain a database of all the schemas that have been published to the topics of the cluster. This "database" is cached in the Schema Registry and stored in an internal Kafka topic for low-latency access.

Anyone would normally want to prevent situations where garbage goes in and garbage comes out. By requiring message producers to adhere to a particular schema structure, Schema Registry prevents them from pushing malformed messages into topics. This spares the downstream consumers a lot of trouble. A messaging platform's ability to enforce data governance therefore depends on the schema registry.

Schema Registry also serves an API that allows us to check whether a message produced to or consumed from a Kafka topic matches, or is compatible with, the older version. Producers and consumers can thus validate an incoming message against the Schema Registry and decide whether it should be published or rejected.
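
For illustration, the registry's Java client exposes the same check programmatically. The sketch below is only an example of that API call; it assumes the registry from the docker-compose file shown later runs at http://localhost:8081 and uses a made-up subject name following the default <topic>-value convention:

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
import org.apache.avro.Schema;

public class CompatibilityCheck {

    public static void main(String[] args) throws Exception {
        // Client pointed at the local Schema Registry; 100 is the size of the schema cache.
        SchemaRegistryClient client = new CachedSchemaRegistryClient("http://localhost:8081", 100);

        // A candidate new version of the value schema for a hypothetical "orders" topic.
        Schema candidate = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                        + "{\"name\":\"id\",\"type\":\"string\"}]}");

        // Ask the registry whether this candidate is compatible with the latest registered version.
        boolean compatible = client.testCompatibility("orders-value", candidate);
        System.out.println("Compatible with the latest registered schema: " + compatible);
    }
}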

Confluent Schema Registry supports schemas of type Avro, JSON and Protobuf. In this example, we will focus on Avro schemas and play around with transforming different objects to and from the Avro format.

Schema Registry

What is Avro?

According to Avro.Apache.org: Apache Avro™ is a data serialization system.

It provides:

  • Rich data structures.
  • A compact, fast, binary data format.
  • A container file, to store persistent data.
  • Remote procedure call (RPC).
  • Simple integration with dynamic languages (a small serialization round-trip sketch follows this list).
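
To get a feel for how compact the binary format is, here is a small standalone Java sketch (plain Avro only, no Kafka or Schema Registry involved) that defines a schema at runtime, serializes a record to bytes and reads it back:

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class AvroRoundTrip {

    public static void main(String[] args) throws Exception {
        // Define a tiny schema at runtime (the real schemas in this guide live in .avsc files).
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Greeting\",\"fields\":["
                        + "{\"name\":\"message\",\"type\":\"string\"}]}");

        // Build a record and serialize it into compact binary bytes.
        GenericRecord record = new GenericData.Record(schema);
        record.put("message", "hello avro");

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();

        // Deserialize the bytes back into a record using the same schema.
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord roundTripped = new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
        System.out.println(roundTripped);   // {"message": "hello avro"}
    }
}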

Now we will first host a Schema Registry from Confluent and then implement our code to integrate with it. We will also host the Kafka broker and ZooKeeper using Confluent's images. A quick docker-compose file would look something like this:

model: "3.3"
companies:
  zookeeper:
    picture: confluentinc/cp-zookeeper:5.3.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    setting:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  dealer:
    picture: confluentinc/cp-enterprise-kafka:5.3.1
    container_name: dealer
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    setting:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://dealer:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: dealer:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'nameless'

  schema-registry:
    picture: confluentinc/cp-schema-registry:5.3.1
    container_name: schema-registry
    depends_on:
      - zookeeper
      - dealer
    ports:
      - "8081:8081"
    setting:
      SCHEMA_REGISTRY_HOST_NAME: localhost
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'

We can then bring up this docker-compose stack by simply executing:

docker-compose up

Once we have our instances up and running, let's start with the implementation. First of all, let's initialize the Spring Boot application using Spring Initializr:

Spring Initializr Kafka Schema Registry

We have added Spring Web and Spring for Apache Kafka to pull in the Kafka libraries and modules, plus Lombok. We will need additional libraries from Apache and Confluent. The libraries we will use from Confluent are not present in the central Maven repositories, hence we first need to add a repository to our pom.xml:

<repositories>
	<repository>
		<id>confluent</id>
		<url>https://packages.confluent.io/maven/</url>
	</repository>
</repositories>

Then we can add the following libraries for Avro and its serialization and deserialization process:

<dependency>
	<groupId>org.apache.avro</groupId>
	<artifactId>avro</artifactId>
	<version>1.11.1</version>
	<scope>compile</scope>
</dependency>
<dependency>
	<groupId>org.apache.avro</groupId>
	<artifactId>avro-compiler</artifactId>
	<version>1.11.1</version>
</dependency>
<dependency>
	<groupId>org.apache.avro</groupId>
	<artifactId>avro-maven-plugin</artifactId>
	<version>1.11.1</version>
</dependency>
<dependency>
	<groupId>io.confluent</groupId>
	<artifactId>kafka-avro-serializer</artifactId>
	<version>5.3.0</version>
</dependency>
<dependency>
	<groupId>io.confluent</groupId>
	<artifactId>monitoring-interceptors</artifactId>
	<version>5.3.0</version>
</dependency>

Finally, we also need to define Maven goals in pom.xml to generate the final POJOs from the Avro schema files. This can be defined as follows:

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
			<configuration>
				<excludes>
					<exclude>
						<groupId>org.projectlombok</groupId>
						<artifactId>lombok</artifactId>
					</exclude>
				</excludes>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-resources-plugin</artifactId>
			<version>3.1.0</version>
		</plugin>
		<plugin>
			<groupId>org.codehaus.mojo</groupId>
			<artifactId>build-helper-maven-plugin</artifactId>
			<version>3.1.0</version>
			<executions>
				<execution>
					<phase>generate-sources</phase>
					<goals>
						<goal>add-source</goal>
					</goals>
					<configuration>
						<sources>
							<source>${project.build.directory}/generated-sources/avro</source>
						</sources>
					</configuration>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro-maven-plugin</artifactId>
			<version>1.11.1</version>
			<executions>
				<execution>
					<id>schemas</id>
					<phase>generate-sources</phase>
					<goals>
						<goal>schema</goal>
						<goal>protocol</goal>
						<goal>idl-protocol</goal>
					</goals>
					<configuration>
						<sourceDirectory>${project.basedir}/src/main/resources/</sourceDirectory>
						<outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
					</configuration>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

This completes our pom.xml changes.

For this example, we will play with order data and try to build an e-commerce messaging system. So, let's first define an Avro schema. It basically contains five core attributes:

  • type: type of the record (record by default)
  • namespace: location of the schema
  • name: name of the record
  • version: schema version
  • fields: fields in the record with their corresponding data types
{
  "type": "record",
  "namespace": "com.stackabuse.kafka_schema_registry.schema",
  "name": "Order",
  "version": "1",
  "fields": [
    {
      "name": "id",
      "type": "string",
      "doc": "Id of the order"
    },
    {
      "name": "product_name",
      "type": "string",
      "doc": "Name of the product"
    },
    {
      "name": "product_id",
      "type": "string",
      "doc": "Id of the product"
    },
    {
      "name": "product_type",
      "type": "string",
      "doc": "Type of the product"
    },
    {
      "name": "product_count",
      "type": "int",
      "doc": "Count of Product Items"
    },
    {
      "name": "listing_date",
      "type": "string",
      "doc": "Date when the Product is listed"
    },
    {
      "name": "customer_id",
      "type": "string",
      "doc": "ID of the customer"
    },
    {
      "name": "customer_email",
      "type": "string",
      "doc": "Email ID of the customer"
    },
    {
      "name": "customer_name",
      "type": "string",
      "doc": "Name of the customer"
    },
    {
      "name": "customer_mobile",
      "type": "string",
      "doc": "Mobile number of customer"
    },
    {
      "name": "shipping_address",
      "type": "string",
      "doc": "Address of the customer to be shipped"
    },
    {
      "name": "shipping_pincode",
      "type": "string",
      "doc": "Pincode to be shipped"
    },
    {
      "name": "status",
      "type": "string",
      "doc": "Status of the order"
    },
    {
      "name": "price",
      "type": "float",
      "doc": "Total price of the order"
    },
    {
      "name": "weight",
      "type": "float",
      "doc": "Weight of the items"
    },
    {
      "name": "automated_email",
      "type": "boolean",
      "default": true,
      "doc": "Field indicating if the user is enrolled in marketing emails"
    }
  ]
}

Avro supports two kinds of data types:

  • Primitive: Avro supports primitive datatypes like string, int, float, and so on.
  • Complex: Avro supports complex datatypes like records, enums, arrays, maps, unions and fixed.

In our example, we are using the primitive datatypes string, int and float, while the only complex datatype is the record itself. Now, we can generate the Java source code using the Maven plugin:

mvn generate-sources

This creates the POJOs in the namespace location defined as part of the schema. The generated code will come in handy for defining our objects and exchanging messages.

Now, let's define a fake producer service that publishes some records to our Kafka topic:

@Slf4j
@Service
public class ProducerService {

    @Autowired
    private KafkaTemplate<String, Order> kafkaTemplate;

    @PostConstruct
    @Transactional
    public void sendMessage() throws ParseException {
        for (int i = 0; i < 10; i++) {
            Order order = Order.newBuilder()
                    .setId(UUID.randomUUID().toString())
                    .setProductName(Faker.instance().commerce().productName())
                    .setProductId(Faker.instance().idNumber().ssnValid())
                    .setProductType(Faker.instance().commerce().department())
                    .setProductCount(Faker.instance().random().nextInt(1, 5))
                    .setListingDate(Instant
                            .ofEpochMilli(Faker
                                    .instance()
                                    .date()
                                    .past(3, TimeUnit.DAYS)
                                    .getTime())
                            .atZone(ZoneId.systemDefault())
                            .toLocalDateTime()
                            .toString())
                    .setCustomerId(Faker.instance().idNumber().invalid())
                    .setCustomerName(Faker.instance().artist().name())
                    .setCustomerEmail(Faker.instance().internet().emailAddress())
                    .setCustomerMobile(Faker.instance().phoneNumber().cellPhone())
                    .setShippingAddress(Faker.instance().address().fullAddress())
                    .setShippingPincode(Faker.instance().address().zipCode())
                    .setStatus("ORDERED")
                    .setPrice(Float.parseFloat(Faker.instance().commerce().price()))
                    .setWeight(Float.parseFloat(Faker.instance().commerce().price()))
                    .setAutomatedEmail(true)
                    .build();

            ListenableFuture<SendResult<String, Order>> future =
                    kafkaTemplate.send("schema-registry-order", order.getId().toString(), order);
            future.addCallback(new ListenableFutureCallback<SendResult<String, Order>>() {

                @Override
                public void onSuccess(final SendResult<String, Order> message) {
                    log.info("sent message= " + message + " with offset= " + message.getRecordMetadata().offset());
                }

                @Override
                public void onFailure(final Throwable throwable) {
                    log.error("unable to send message= " + order, throwable);
                }
            });
        }
    }
}

This publishes order records in Avro format to our topic, which we can consume and process in a downstream application. Next, we can define a simple consumer service that consumes these messages from Kafka, deserializes the Avro records into our POJO and logs them.

@Slf4j
@Service
public class ConsumerService {

    @KafkaListener(topics = "schema-registry-order",
            autoStartup = "true")
    public void consumeOrderMessage(Order order) {
        log.info("Consumed Order message: {}", order);
    }
}

Now, we need to define the Kafka properties in application.yml, which will drive the creation of our KafkaTemplate and KafkaListener beans:

spring:
  kafka:
    properties:
      bootstrap.servers: localhost:9092
      schema.registry.url: http://localhost:8081
    consumer:
      group-id: schema-registry-group
      auto-offset-reset: latest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
      properties:
        specific.avro.reader: true
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer

As you may notice, we are using KafkaAvroSerializer and KafkaAvroDeserializer to serialize and deserialize our records against the Schema Registry.

Finally, we will add annotations to our main class to enable Kafka and scan the components to instantiate our beans:

@EnableKafka
@ComponentScan
@SpringBootApplication
public class KafkaSchemaRegistryApplication {

	public static void main(String[] args) {
		SpringApplication.run(KafkaSchemaRegistryApplication.class, args);
	}

	@Bean
	NewTopic newTopic() {
		return new NewTopic("schema-registry-order", 3, (short) 1);
	}

}

Here we also create the topic that is required for exchanging our records.

This marks our "Extract and Transform" example as complete. Now we can build and run the application. It will publish some random records on startup, then automatically ingest the data from Kafka, transform it into the generated POJOs and write it to our logs. All of the code for this section can be found on GitHub.

Extract from Kafka and Transform To and From Avro in Spring Cloud Stream with Schema Registry and Kafka

In our previous example, we looked into a generic Spring Boot implementation to complete our use case. But in real-world production, we might require an infrastructure that is highly scalable and event-driven. For that, we can use Spring Cloud Stream.

Spring Cloud Stream is a framework for creating highly scalable, event-driven microservices connected to shared messaging systems. The components provided by Spring Cloud Stream abstract the interaction with multiple message brokers away from the code.

Developers can simply focus on writing code that emits and receives messages without having to write code specific to one message broker. These components abstract away the interactions with message brokers. As a result, switching from one message broker to another can be as simple as changing your code's dependencies or libraries.

You can test your code easily as well, because Spring Cloud Stream is part of the Spring framework. It integrates well with your existing CI process, since it offers ways to execute tests without having to connect directly to a message broker.

It offers three main components:

  1. Binder – This is responsible for the communication with a specific message broker. For example, there is a RabbitMQ binder, a Kafka binder, and so on.
  2. Binding – This is an interface for sending and receiving messages. This component connects the abstract channels in our code with a topic or queue that is handled by the binder.
  3. Message – This is the actual data, or data structure, used to communicate with the bindings between our code and our message broker.

As we can see, in this setup the binder is connected to two logical bindings, one for input and one for output. The communication with the message broker is then handled directly by the binder. Spring abstracts away as much code as possible; we only need to worry about establishing a connection to the broker, creating the channels, and defining how everything routes.

In its default configuration, Spring Cloud Stream will serialize your data as best it can and construct channels and names automatically if they don't already exist. However, all of this is configurable and adaptable and, as with a lot of Spring, it makes the best judgements it can given the information it has.
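
To give a feel for the programming model before we bring Avro into the picture, a single functional bean is all it takes to wire a processor between two bindings. The following is only an illustrative sketch with made-up binding and destination names, not part of the project we build below:

import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UppercaseProcessor {

    // The bean name "uppercase" yields the bindings "uppercase-in-0" (input) and
    // "uppercase-out-0" (output). They are mapped to broker destinations purely through
    // configuration, e.g. spring.cloud.stream.bindings.uppercase-in-0.destination=raw-text
    // (the destination name here is made up for illustration).
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }
}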

In this example, we will implement the same "Extract and Transform" mechanism using Spring Cloud Stream. We will define a schema and use the Schema Registry to validate, serialize and deserialize our data. So, let's start with the implementation. First of all, let's initialize the Spring Boot application.

We will add the Spring Cloud Stream Kafka Binder to pull in the streaming libraries and modules for Kafka, along with Spring Cloud Stream Schema and Lombok. We also need additional libraries from Apache and Confluent. The libraries we will use from Confluent are not present in the central Maven repositories, hence we first need to add a repository to our pom.xml:

<repositories>
	<repository>
		<id>confluent</id>
		<url>https://packages.confluent.io/maven/</url>
	</repository>
</repositories>

Then we can add the following libraries for Avro and its serialization and deserialization process:

<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-stream-binder-kafka</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-stream-schema</artifactId>
		<version>2.2.1.RELEASE</version>
	</dependency>
	<dependency>
		<groupId>io.confluent</groupId>
		<artifactId>kafka-avro-serializer</artifactId>
		<version>${kafka-avro-serializer.version}</version>
	</dependency>
	<dependency>
		<groupId>org.apache.avro</groupId>
		<artifactId>avro-compiler</artifactId>
		<version>${avro.version}</version>
	</dependency>
	<dependency>
		<groupId>org.apache.avro</groupId>
		<artifactId>avro-maven-plugin</artifactId>
		<version>${avro.version}</version>
	</dependency>
	<dependency>
		<groupId>org.apache.avro</groupId>
		<artifactId>avro</artifactId>
		<version>${avro.version}</version>
		<scope>compile</scope>
	</dependency>
	<dependency>
		<groupId>org.projectlombok</groupId>
		<artifactId>lombok</artifactId>
		<optional>true</optional>
	</dependency>
	<dependency>
		<groupId>org.yaml</groupId>
		<artifactId>snakeyaml</artifactId>
		<version>1.31</version>
	</dependency>
	<dependency>
		<groupId>com.github.javafaker</groupId>
		<artifactId>javafaker</artifactId>
		<version>1.0.2</version>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-test</artifactId>
		<scope>test</scope>
	</dependency>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-stream</artifactId>
		<scope>test</scope>
		<classifier>test-binder</classifier>
		<type>test-jar</type>
	</dependency>
</dependencies>

Finally, we also need to define Maven goals in pom.xml to generate the final POJOs from the Avro schema files. This can be defined as follows:

<build>
	<plugins>
		<plugin>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-maven-plugin</artifactId>
			<configuration>
				<excludes>
					<exclude>
						<groupId>org.projectlombok</groupId>
						<artifactId>lombok</artifactId>
					</exclude>
				</excludes>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-resources-plugin</artifactId>
			<version>3.1.0</version>
		</plugin>
		<plugin>
			<groupId>org.codehaus.mojo</groupId>
			<artifactId>build-helper-maven-plugin</artifactId>
			<version>3.1.0</version>
			<executions>
				<execution>
					<phase>generate-sources</phase>
					<goals>
						<goal>add-source</goal>
					</goals>
					<configuration>
						<sources>
							<source>${project.build.directory}/generated-sources/avro</source>
						</sources>
					</configuration>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro-maven-plugin</artifactId>
			<version>1.11.1</version>
			<executions>
				<execution>
					<id>schemas</id>
					<phase>generate-sources</phase>
					<goals>
						<goal>schema</goal>
						<goal>protocol</goal>
						<goal>idl-protocol</goal>
					</goals>
					<configuration>
						<sourceDirectory>${project.basedir}/src/main/resources/</sourceDirectory>
						<outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
					</configuration>
				</execution>
			</executions>
		</plugin>
	</plugins>
</build>

This completes our pom.xml changes.

In this example, we will play around with customer data as part of the same e-commerce application. But we will also have one more schema to hold a composite key for the customer, so that we can publish unique records with this key into the Kafka topic. Let's define our schema files:

{
  "type": "record",
  "name": "Customer",
  "namespace": "com.stackabuse.kafka_spring_cloud_stream_schema_registry.schema",
  "fields": [
    {
      "name": "id",
      "type": "int",
      "doc": "Id of the customer"
    },
    {
      "name": "firstName",
      "type": "string",
      "doc": "First Name of the customer"
    },
    {
      "name": "lastName",
      "type": "string",
      "doc": "Last Name of the customer"
    },
    {
      "name": "department",
      "type": "string",
      "default": "IT ",
      "doc": "Department of the customer"
    },
    {
      "name": "designation",
      "type": "string",
      "default": "Software Engineer",
      "doc": "Designation of the customer"
    }
  ]
}
{
  "type": "record",
  "name": "CustomerKey",
  "namespace": "com.stackabuse.kafka_spring_cloud_stream_schema_registry.schema",
  "fields": [
    {
      "name": "id",
      "type": "int",
      "doc": "Id of the customer"
    },
    {
      "name": "departmentName",
      "type": "string",
      "doc": "Department of the customer"
    }
  ]
}

Now, as we did in our earlier example, we can generate the Java source code using the Maven plugin:

mvn generate-sources

This will create the POJOs under the schema package.

Now, we will define the configuration to load the Confluent Schema Registry endpoint into our code:

@Configuration
public class SchemaRegistryConfig {

    @Bean
    public SchemaRegistryClient schemaRegistryClient(
            @Value("${spring.cloud.stream.kafka.binder.producer-properties.schema.registry.url}")
                    String endPoint) {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endPoint);
        return client;
    }
}

Next, we will define a function to produce our messages into the Kafka topic. It will generate some fake records using the Avro schema:

@Slf4j
@Service
public class ProducerService {

    @Autowired
    private StreamBridge streamBridge;

    @Scheduled(cron = "*/10 * * * * *")
    public void producer() {

        for (int i = 0; i < 10; i++) {
            String department = Faker.instance().commerce().department();
            Customer customer = Customer.newBuilder()
                    .setId(i)
                    .setFirstName(Faker.instance().friends().character())
                    .setLastName(Faker.instance().gameOfThrones().character())
                    .setDepartment(department)
                    .setDesignation(Faker.instance().company().profession())
                    .build();

            CustomerKey customerKey = CustomerKey.newBuilder()
                    .setId(i)
                    .setDepartmentName(department)
                    .build();
            log.info("Producing message: {}", customer);
            streamBridge.send("producer-out-0", MessageBuilder.withPayload(customer)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, customerKey)
                    .build());
        }
    }
}

We are using StreamBridge here to send the records to the "producer-out-0" channel. Next, we will define a Consumer function to consume messages from the incoming channel and log them:

@Slf4j
@Service
public class ConsumerService {

    @Bean
    public Consumer<Customer> consumer() {
        return c -> log.info("Consumed customer details: {}", c);
    }
}

Next, we will define our Spring Cloud Stream configuration for the Kafka binder and bindings:

spring:
  main:
    allow-bean-definition-overriding: true
  cloud:
    function:
      definition: producer;consumer;
    stream:
      bindings:
        consumer-in-0:
          consumer:
            use-native-decoding: true # Enables using the custom deserializer
            max-attempts: 3
            back-off-initial-interval: 100
          destination: customer-details
          content-type: application/*+avro
          group: group-customer
          concurrency: 3
        producer-out-0:
          destination: customer-details
          content-type: application/*+avro
          producer:
            useNativeEncoding: true # Enables using the custom serializer
      kafka:
        binder:
          brokers: localhost:9092
          autoCreateTopics: true
          autoAddPartitions: true
          replication-factor: 1
          configuration:
            processing.guarantee: exactly_once
            isolation.level: read_committed
            commit.interval.ms: 1000
          producer-properties:
            key.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            schema.registry.url: http://localhost:8081
            acks: all
            enable.idempotence: true
          consumer-properties:
            key.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
            specific.avro.reader: true
            allow.auto.create.topics: true

Because we are going to serialize messages using the Apache Avro format, we need to change the default content type for all topics to application/*+avro. The message is sent with a contentType header that follows the scheme application/[prefix].[subject].v[version]+avro, where the prefix is configurable and the subject is deduced from the payload type. The default prefix is vnd, and since the name of the message class is Customer, the value of the header would be application/vnd.customer.v1+avro for the v1 version of the schema, or application/vnd.customer.v2+avro for v2.

Finally, we will add the annotations needed to instantiate the required beans on our application's main class:

@EnableScheduling
@EnableSchemaRegistryClient
@SpringBootApplication
public class KafkaSpringCloudStreamSchemaRegistryApplication {

	public static void main(String[] args) {
		SpringApplication.run(KafkaSpringCloudStreamSchemaRegistryApplication.class, args);
	}
}

We are using @EnableSchemaRegistryClient to enable the Confluent Schema Registry client and @EnableScheduling to enable the cron scheduler that publishes our messages every 10 seconds. You can find all of the above code on GitHub.

Extract from Kafka, Transform to/from Avro and Load Into a NoSQL Cassandra Database in Spring Boot

Until now, we have looked at examples where we built a system that can either extract and load directly into a database, or extract and transform into a desired data format. In the upcoming examples, we will try to build a data pipeline that is closer to production use-cases: we will extract the data from Kafka, transform it into the desired format and load it into a NoSQL database for faster processing. We have already looked at SQL databases like PostgreSQL and MySQL, so now we will look at one of the most popular databases used for data warehousing across various organizations and products.

Why Cassandra?

According to its official page, "Apache Cassandra is an open source, distributed, decentralized, elastically scalable, highly available, fault-tolerant, tuneably consistent, row-oriented database. Cassandra bases its distribution design on Amazon's Dynamo and its data model on Google's Bigtable, with a query language similar to SQL. Created at Facebook, it now powers cloud-scale applications across many industries."

Let's break that down into its characteristics:

  • Distributed and Decentralized: It is distributed, meaning it can run on multiple machines while appearing to users as a single entity. Cassandra is also decentralized, meaning it has no single point of failure. Decentralization has two major benefits: it is simpler to operate than a primary/secondary setup, and it helps prevent outages. Because all nodes are the same, a decentralized store is easier to run and manage than a primary/secondary one; there is not much of a difference between setting up one node and a hundred, and almost no extra configuration is needed to support it. Cassandra's identical replicas make sure that service is not interrupted even in the event of a node loss, which in turn helps achieve high availability.
  • Elastic Scalability: Cassandra supports elastic scalability, which is a property of horizontal scalability. It means that your cluster can scale up and down without any noticeable lag. To do this, the cluster must be able to accept additional nodes that join by obtaining a copy of some or all of the data and begin serving client requests without significantly disrupting the cluster or requiring a cluster-wide reconfiguration. Your process does not need to be restarted.
  • High Availability and Fault Tolerance: Cassandra is highly available. We can replace corrupted or failed nodes in a cluster easily without any downtime. We can also replicate our data to multiple data centers across different geographical regions so that it can offer local performance and prevent downtime during any disaster.
  • Tunable Consistency: Cassandra is better described as "eventually consistent" since it enables us to choose the amount of consistency we need while balancing it against the availability level. In a distributed system, eventual consistency means that all updates will eventually propagate to all replicas, though this process may take some time; rest assured, all replicas will eventually become consistent.
  • Row Oriented: Cassandra's data model can be characterized as a partitioned row store, where data is kept in sparse multidimensional hash tables.
  • High Performance: Cassandra was built specifically to run across dozens of multiprocessor/multicore machines located in multiple data centers and to fully use their capabilities. It scales smoothly and consistently to multiple terabytes, has been demonstrated to perform remarkably well under heavy load, and shows very high write throughput. We can preserve all of Cassandra's advantageous traits without compromising performance as we add more servers.

Cassandra is perhaps best suited for:

  • Real-time data processing.
  • High-speed, always-available online transactions and other latency-sensitive applications.
  • Handling large amounts of concurrent traffic and engagement.
  • Applications requiring high availability for large-scale reads.

So let's first spin up a Cassandra cluster, and then we will start with our implementation. We can quickly define a docker-compose file to deploy a Cassandra cluster:

version: '3'

services:
  cassandra:
    image: cassandra:4.0
    ports:
      - '7000:7000'
      - '9042:9042'
    volumes:
      - '~/apps/cassandra:/var/lib/cassandra'
    healthcheck:
      test: [ "CMD", "/opt/cassandra/bin/cqlsh", "-u", "cassandra", "-p", "cassandra", "-e", "describe keyspaces" ]
      interval: 15s
      timeout: 10s
      retries: 10
    environment:
      - CASSANDRA_CLUSTER_NAME=stackabuse
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD=cassandra

Then we can run our instance by executing:

docker-compose up
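Note that Spring Data's schema-action only creates tables, not keyspaces, so the stackabuse keyspace referenced later in application.yaml has to exist before the application starts. As a minimal, hedged sketch (the single-node replication settings are illustrative), it can be created once through cqlsh inside the container:

# Illustrative: create the keyspace used by the examples below.
docker-compose exec cassandra cqlsh -u cassandra -p cassandra \
  -e "CREATE KEYSPACE IF NOT EXISTS stackabuse WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"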

Now, let's start with our implementation. First, we need to define our pom.xml in a similar way to our earlier examples. We will add the Avro and Confluent Schema Registry related libraries, and then the following libraries for Spring Kafka, Spring Cassandra and Spring Web:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-cassandra</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
	<groupId>org.springframework.kafka</groupId>
	<artifactId>spring-kafka</artifactId>
</dependency>

We will use the Order data for this example, as we did in our earlier example. So let's quickly define our schema file:

{
  "type": "record",
  "namespace": "com.stackabuse.kafka_spring_boot_cassandra.schema",
  "name": "Order",
  "version": "1",
  "fields": [
    {
      "name": "id",
      "type": "string",
      "doc": "Id of the order"
    },
    {
      "name": "product_name",
      "type": "string",
      "doc": "Name of the product"
    },
    {
      "name": "product_id",
      "type": "string",
      "doc": "Id of the product"
    },
    {
      "name": "product_type",
      "type": "string",
      "doc": "Type of the product"
    },
    {
      "name": "product_count",
      "type": "int",
      "doc": "Count of Product Items"
    },
    {
      "name": "listing_date",
      "type": "string",
      "doc": "Date when the Product is listed"
    },
    {
      "name": "customer_id",
      "type": "string",
      "doc": "ID of the customer"
    },
    {
      "name": "customer_email",
      "type": "string",
      "doc": "Email ID of the customer"
    },
    {
      "name": "customer_name",
      "type": "string",
      "doc": "Name of the customer"
    },
    {
      "name": "customer_mobile",
      "type": "string",
      "doc": "Mobile number of customer"
    },
    {
      "name": "shipping_address",
      "type": "string",
      "doc": "Address of the customer to be shipped"
    },
    {
      "name": "shipping_pincode",
      "type": "string",
      "doc": "Pincode to be shipped"
    },
    {
      "name": "status",
      "type": "string",
      "doc": "Status of the order"
    },
    {
      "name": "price",
      "type": "float",
      "doc": "Total price of the order"
    },
    {
      "name": "weight",
      "type": "float",
      "doc": "Weight of the items"
    },
    {
      "name": "automated_email",
      "type": "boolean",
      "default": true,
      "doc": "Field indicating if the user is enrolled in marketing emails"
    }
  ]
}

Then we can run the following command to generate the schema classes under the schema package:

mvn generate-sources

Next, we will define our Cassandra entity for Order so that we can transform the Avro data into table content:

@Table
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class Order {

    @PrimaryKeyColumn(
            ordinal = 2,
            type = PrimaryKeyType.CLUSTERED,
            ordering = Ordering.DESCENDING)
    private UUID id;

    @PrimaryKeyColumn(
            name = "product_name",
            ordinal = 0,
            type = PrimaryKeyType.PARTITIONED)
    private String productName;

    @PrimaryKeyColumn(
            name = "product_id",
            ordinal = 1,
            type = PrimaryKeyType.PARTITIONED)
    private String productId;

    @Column(value = "product_type")
    private String productType;

    @Column(value = "product_count")
    private int productCount;

    @Column(value = "listing_date")
    private String listingDate;

    @Column(value = "customer_id")
    private String customerId;

    @Column(value = "customer_email")
    private String customerEmail;

    @Column(value = "customer_name")
    private String customerName;

    @Column(value = "customer_mobile")
    private String customerMobile;

    @Column(value = "shipping_address")
    private String shippingAddress;

    @Column(value = "shipping_pincode")
    private String shippingPincode;

    private String status;
    private double price;
    private double weight;

    @Column(value = "automated_email")
    private Boolean automatedEmail;
}

We will partition our data store using the product_name and product_id fields of our table. We are also using the @Table annotation so that the bean is directly mapped to a Cassandra table. Unlike @Id in JPA, Cassandra uses @PrimaryKeyColumn, as shown above; the resulting key layout is sketched below.
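To make the key layout concrete, the entity above corresponds roughly to the following CQL. This is an illustrative sketch inferred from the annotations, not the exact DDL that Spring Data generates (the table name is quoted because ORDER is a reserved word in CQL):

-- Illustrative sketch of the key layout the Order entity maps to
CREATE TABLE IF NOT EXISTS stackabuse."order" (
    product_name text,
    product_id text,
    id uuid,
    product_type text,
    product_count int,
    listing_date text,
    customer_id text,
    customer_email text,
    customer_name text,
    customer_mobile text,
    shipping_address text,
    shipping_pincode text,
    status text,
    price double,
    weight double,
    automated_email boolean,
    PRIMARY KEY ((product_name, product_id), id)
) WITH CLUSTERING ORDER BY (id DESC);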

Now, we will define the repository interface to create the CassandraRepository instance:

public interface OrderRepository extends CassandraRepository<Order, UUID> {

    // Like other database repositories, the commonly used methods are already provided by CassandraRepository,
    // so we don't need to write those here. We can add custom methods, such as the one below.
    @AllowFiltering
    List<Order> findByProductName(String productName);
}

As we can see, we have used the @AllowFiltering annotation, which performs server-side filtering for that method. It executes a query similar to:

SELECT * FROM "order" WHERE product_name = 'product';

If we fail to annotate the method with @AllowFiltering, we will get the following error:

Bad Request: Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING.
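With the annotation in place, Spring Data appends ALLOW FILTERING to the derived query, so the statement it runs is roughly:

-- Roughly what the annotated repository method executes
SELECT * FROM "order" WHERE product_name = 'product' ALLOW FILTERING;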

Next, let's define a fake producer service that will publish some records into our Kafka topic:

@Slf4j
@Service
public class ProducerService {

    @Autowired
    private KafkaTemplate<String, Order> kafkaTemplate;

    @PostConstruct
    @Transactional
    public void sendMessage() throws ParseException {
        for (int i = 0; i < 10; i++) {
            Order order = Order.newBuilder()
                    .setId(UUID.randomUUID().toString())
                    .setProductName(Faker.instance().commerce().productName())
                    .setProductId(Faker.instance().idNumber().ssnValid())
                    .setProductType(Faker.instance().commerce().department())
                    .setProductCount(Faker.instance().random().nextInt(1, 5))
                    .setListingDate(Instant
                            .ofEpochMilli(Faker
                                    .instance()
                                    .date()
                                    .past(3, TimeUnit.DAYS)
                                    .getTime())
                            .atZone(ZoneId.systemDefault())
                            .toLocalDateTime()
                            .toString())
                    .setCustomerId(Faker.instance().idNumber().invalid())
                    .setCustomerName(Faker.instance().artist().name())
                    .setCustomerEmail(Faker.instance().internet().emailAddress())
                    .setCustomerMobile(Faker.instance().phoneNumber().cellPhone())
                    .setShippingAddress(Faker.instance().address().fullAddress())
                    .setShippingPincode(Faker.instance().address().zipCode())
                    .setStatus("ORDERED")
                    .setPrice(Float.parseFloat(Faker.instance().commerce().price()))
                    .setWeight(Float.parseFloat(Faker.instance().commerce().price()))
                    .setAutomatedEmail(true)
                    .build();

            ListenableFuture<SendResult<String, Order>> future =
                    kafkaTemplate.send("cassandra-order", order.getId().toString(), order);
            future.addCallback(new ListenableFutureCallback<SendResult<String, Order>>() {

                @Override
                public void onSuccess(final SendResult<String, Order> message) {
                    log.info("sent message= " + message + " with offset= " + message.getRecordMetadata().offset());
                }

                @Override
                public void onFailure(final Throwable throwable) {
                    log.error("unable to send message= " + order, throwable);
                }
            });
        }
    }
}

This will publish Order records into our topic in Avro format, which we can consume and process for our downstream application. Next, we can define a simple consumer service that will consume these messages from Kafka, deserialize the Avro records into our entity type and finally push them into Cassandra:

@Slf4j
@Service
public class ConsumerService {

    @Autowired
    OrderRepository orderRepository;

    @KafkaListener(topics = "cassandra-order",
            autoStartup = "true")
    public void consumeOrderMessage(Order order) {
        log.info("Consumed Order message: {}", order);

        com.stackabuse.kafka_spring_boot_cassandra.entity.Order orderEntity =
                com.stackabuse.kafka_spring_boot_cassandra.entity.Order
                .builder()
                .id(UUID.fromString(order.getId().toString()))
                .productName(order.getProductName().toString())
                .productId(order.getProductId().toString())
                .productType(order.getProductType().toString())
                .productCount(order.getProductCount())
                .listingDate(order.getListingDate().toString())
                .customerId(order.getCustomerId().toString())
                .customerEmail(order.getCustomerEmail().toString())
                .customerName(order.getCustomerName().toString())
                .customerMobile(order.getCustomerMobile().toString())
                .shippingAddress(order.getShippingAddress().toString())
                .shippingPincode(order.getShippingPincode().toString())
                .status(order.getStatus().toString())
                .price(order.getPrice())
                .weight(order.getWeight())
                .automatedEmail(order.getAutomatedEmail())
                .build();

        com.stackabuse.kafka_spring_boot_cassandra.entity.Order savedOrder = orderRepository.save(orderEntity);
        log.info("Order saved into Cassandra: {}", savedOrder);

    }
}

Now, we will define a controller to load this data from Cassandra and expose it as a REST API. So we will create a controller and define three methods: getAllOrders(), getOrderById() and createOrder():

@RestController
@RequestMapping("/api/orders")
public class OrderController {

    @Autowired
    OrderRepository orderRepository;

    @GetMapping
    public ResponseEntity<List<Order>> getAllOrders(@RequestParam(required = false) String productName) {
        try {
            List<Order> orders = new ArrayList<>();

            if (Objects.isNull(productName)) {
                orders.addAll(orderRepository.findAll());
            } else {
                orders.addAll(orderRepository.findByProductName(productName));
            }

            if (orders.isEmpty()) {
                return new ResponseEntity<>(HttpStatus.NO_CONTENT);
            }

            return new ResponseEntity<>(orders, HttpStatus.OK);
        } catch (Exception e) {
            return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    @GetMapping("/{id}")
    public ResponseEntity<Order> getOrderById(@PathVariable("id") UUID id) {
        return orderRepository
                .findById(id)
                .map(order -> new ResponseEntity<>(order, HttpStatus.OK))
                .orElseGet(() -> new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @PostMapping
    public ResponseEntity<Order> createOrder(@RequestBody Order order) {
        try {
            return new ResponseEntity<>(orderRepository.save(order), HttpStatus.CREATED);
        } catch (Exception e) {
            return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

}

Finally, we will define the properties for Kafka and Cassandra as part of application.yaml:

spring:
  kafka:
    properties:
      bootstrap.servers: localhost:9092
      schema.registry.url: http://localhost:8081
    consumer:
      group-id: cassandra-group
      auto-offset-reset: latest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
      properties:
        specific.avro.reader: true
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer

  data:
    cassandra:
      keyspace-name: stackabuse
      contact-points:
        - localhost
      local-datacenter: datacenter1
      port: 9042
      schema-action: CREATE_IF_NOT_EXISTS
      username: admin
      password: password
      connection:
        connect-timeout: 60000ms
        read-timeout: 60000ms
      pool:
        pool-timeout: 60000ms

This marks our “Extract, Transform and Load” pipeline as complete. Now we can build our application and run it. It will first publish some random records on startup, then automatically ingest the data from Kafka, transform it into the defined Cassandra table entities and push it into the database. We can also hit the APIs to fetch the data. For example, we can filter Order records by product name by calling the REST API with cURL:

curl -i -X GET \
 'http://localhost:8083/api/orders?productName=Awesome%20Leather%20Lamp'
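Similarly, we can exercise the createOrder() endpoint directly. The following is only an illustrative sketch, with made-up field values that match the entity's shape (the id must be a valid UUID, since that is the entity's key type):

# Illustrative request only; all field values below are made up
curl -i -X POST 'http://localhost:8083/api/orders' \
 -H 'Content-Type: application/json' \
 -d '{
   "id": "1b7e3e1e-9f6a-4c2e-8f6c-0a5a7e6d9c11",
   "productName": "Awesome Leather Lamp",
   "productId": "123-45-6789",
   "productType": "Home",
   "productCount": 2,
   "listingDate": "2023-04-28T10:15:30",
   "customerId": "CUST-001",
   "customerEmail": "jane.doe@example.com",
   "customerName": "Jane Doe",
   "customerMobile": "555-0100",
   "shippingAddress": "221B Baker Street",
   "shippingPincode": "560001",
   "status": "ORDERED",
   "price": 49.99,
   "weight": 1.2,
   "automatedEmail": true
 }'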

However, we can go one step further and build a simple static UI page to list all orders and search them from a huge list at lightning speed with the help of the above API. We are going to use Thymeleaf to build a quick UI in Spring Boot. If you are new to this and want to learn more about Thymeleaf, you can follow this tutorial. We will also use jQuery tools and Bootstrap components for our UI implementation.

So let's add some of the libraries that we need to our existing pom.xml:


		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-thymeleaf</artifactId>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>jquery</artifactId>
			<version>3.5.0</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>jQuery-Autocomplete</artifactId>
			<version>1.4.10</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>bootstrap</artifactId>
			<version>4.4.1</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>jquery-ui</artifactId>
			<version>1.12.1</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>bootstrap-table</artifactId>
			<version>1.15.5</version>
		</dependency>

Then we will define a controller to route the UI endpoints to a static HTML page:

@Controller
public class UIController {

    @Autowired
    OrderRepository orderRepository;

    @GetMapping("/")
    public String displayHome(Model model) {
        return "main/home";
    }

    @GetMapping("/orders/all")
    public String displayCustomers(Model model) {
        model.addAttribute("orders", orderRepository.findAll());
        return "orders/list-orders";
    }
}

Now we need to define a Home page that gives a brief intro about the website, and then we have to add a menu item to the navigation bar that routes to the order listing page.

So let's start with the HTML implementation. We need to define all our HTML pages under the src/main/resources/templates folder. First we will implement the Home page as part of home.html:

<!DOCTYPE html>

<html xmlns:th="http://www.thymeleaf.org">

    <head th:replace="layouts :: header"></head>

    <body class="bg-dark mb-4">
        <nav th:replace="layouts :: navbar"></nav>
        <div th:style="'color:white'">
            <h3 style="text-align:center">Welcome to E-commerce App !!</h3>
            <p style="text-align:center">Please click on the List Orders button to see the entire list!</p>
        </div>
    </body>

</html>

Next we need to define a navbar that lists the menu items as part of layouts.html, which is then pulled in with th:replace in the home.html above:

<!DOCTYPE html>

<html xmlns:th="http://www.thymeleaf.org">

<head th:fragment="header">
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="icon" type="image/icon" th:href="@{/img/favicon.ico}"/>
    <title>E-commerce App</title>

    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"
          integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
    <link rel="stylesheet" type="text/css" href="webjars/bootstrap/4.4.1/css/bootstrap.css"/>

    <link rel="stylesheet" href="webjars/bootstrap-table/1.15.5/bootstrap-table.css">

    <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">

    <link rel="stylesheet" href="webjars/jquery-ui/1.12.1/jquery-ui.css">
    <style>
        .table-two-color {
            color: #e2e2e2;
            background-color: #2b4a47;
        }
    </style>
</head>

<nav th:fragment="navbar" class="navbar navbar-expand-lg navbar-dark bg-primary">
    <a class="navbar-brand" href="#">E-commerce App</a>
    <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
        <span class="navbar-toggler-icon"></span>
    </button>

    <div class="collapse navbar-collapse" id="navbarSupportedContent">
        <ul class="navbar-nav mr-auto nav-pills justify-content-center">
            <li class="nav-item">
                <a class="nav-link active" th:href="@{/}">Home <span class="sr-only">(current)</span></a>
            </li>
            <li class="nav-item">
                <a class="nav-link active" th:href="@{/orders/all}">List Orders</a>
            </li>
        </ul>
    </div>
</nav>

</html>

Next we will implement the order listing page with a table, a search bar and a search button as part of list-orders.html:

<!DOCTYPE html>

<html xmlns:th="http://www.thymeleaf.org">

<head th:replace="layouts :: header"></head>

<body class="bg-dark mb-4">

<nav th:replace="layouts :: navbar"
     class="navbar navbar-expand-md navbar-dark bg-dark mb-4"></nav>


<div th:style="'color:white'">
    <h3 style="text-align: center">Order Listing</h3>
    <hr>
    <div class="starter-template">
        <form class="form-horizontal" id="search-form">
            <div class="form-row align-items-center">
                <div class="col-sm-10">
                    <input type="text" class="form-control" id="term"
                           placeholder="Search Orders by Product Name">
                </div>
                <div class="col-auto">
                    <button type="submit" id="btn-search"
                            class="btn btn-outline-success btn-sm mb-3">Search</button>
                </div>
            </div>
        </form>
    </div>
    <div id="feedback" class="bg-info">
        <table class="table table-striped table-dark">
            <thead class="thead-dark">
            <tr>
                <th>ID</th>
                <th>Product Name</th>
                <th>Product ID</th>
                <th>Product Type</th>
                <th>Product Count</th>
                <th>Listing Date</th>
                <th>Customer ID</th>
                <th>Customer Email</th>
                <th>Customer Name</th>
                <th>Customer Mobile</th>
                <th>Shipping Address</th>
                <th>Shipping Pincode</th>
                <th>Status</th>
                <th>Price</th>
                <th>Weight</th>
            </tr>
            </thead>
            <tbody th:style="'color:white'">
            <tr th:each="aOrder: ${orders}">
                <td th:text="@{${aOrder.id}}"></td>
                <td th:text="${aOrder.productName}"></td>
                <td th:text="${aOrder.productId}"></td>
                <td th:text="${aOrder.productType}"></td>
                <td th:text="${aOrder.productCount}"></td>
                <td th:text="${aOrder.listingDate}"></td>
                <td th:text="${aOrder.customerId}"></td>
                <td th:text="${aOrder.customerEmail}"></td>
                <td th:text="${aOrder.customerName}"></td>
                <td th:text="${aOrder.customerMobile}"></td>
                <td th:text="${aOrder.shippingAddress}"></td>
                <td th:text="${aOrder.shippingPincode}"></td>
                <td th:text="${aOrder.status}"></td>
                <td th:text="${aOrder.price}"></td>
                <td th:text="${aOrder.weight}"></td>
            </tr>
            </tbody>
        </table>
    </div>
</div>

<script type="text/javascript" src="/webjars/jquery/3.5.0/jquery.js"></script>
<script type="text/javascript"
        src="/webjars/jquery/3.5.0/jquery.min.js"></script>
<script type="text/javascript"
        src="/webjars/jQuery-Autocomplete/1.4.10/jquery.autocomplete.js"></script>
<script type="text/javascript"
        src="/webjars/jQuery-Autocomplete/1.4.10/jquery.autocomplete.min.js"></script>
<script type="text/javascript" src="/js/main.js"></script>
</body>

</html>

Finally, we need to define a JavaScript implementation that makes AJAX calls to load the data from the API and replaces the content in the UI:

$(document).ready(function () {

    $("#search-form").submit(function (event) {
        event.preventDefault();
        search_submit();
    });
});


function suggest(suggestion) {
    search_submit()
}


function search_submit() {
	var t0 = performance.now();
    var search = "productName=" + $("#term").val();

    $("#btn-search").prop("disabled", true);

    console.log("Search term: {}", $("#term").val());
    $.ajax({
        type: "GET",
        contentType: "application/json",
        url: "/api/orders",
        data: search,
        dataType: 'json',
        cache: false,
        timeout: 600000,
        success: function (data) {
            $('#feedback').empty();
            var table = [];
            console.log("Data returned: {}", data);
            if (data.length > 0) {
            	var total = data.length;
            	var t1 = performance.now();
            	var diff = ((t1 - t0)/1000).toFixed(2);
            	$('#feedback').append('Returned ' + total + ' records in ' + diff + ' seconds!');
                var items = data;
                for (var i = 0; items.length > i; i++) {
                    var item = items[i];
                    table.push([item.id,
                     item.productName,
                     item.productId,
                     item.productType,
                     item.productCount,
                     item.listingDate,
                     item.customerId,
                     item.customerEmail,
                     item.customerName,
                     item.customerMobile,
                     item.shippingAddress,
                     item.shippingPincode,
                     item.status,
                     item.price,
                     item.weight])
                }
                makeTable($('#feedback'), table);
                $("#btn-search").prop("disabled", false);
            } else {
                $('#feedback').append('No Data Found');
            }
        },
        error: function (e) {
            $('#feedback').empty();
            var json = "<h4>Search Error Response</h4><pre>"
                + e.responseText + "</pre>";
            $('#feedback').html(json);

            console.log("ERROR : ", e);
            $("#btn-search").prop("disabled", false);

        }
    });

    function makeTable(container, data) {
        var table = $("<table/>").addClass('table table-striped table-dark');
        var headers = ["ID",
            "Product Name",
            "Product ID",
            "Product Type",
            "Product Count",
            "Listing Date",
            "Customer ID",
            "Customer Email",
            "Customer Name",
            "Customer Mobile",
            "Shipping Address",
            "Shipping Pincode",
            "Status",
            "Price",
            "Weight"];
        var thead = $("<thead/>").addClass('thead-dark');
        var heads = $("<tr/>");
        $.each(headers, function (colIndex, c) {
            heads.append($("<th/>").text(c));
        });
        var tbody = $("<tbody/>");
        table.append(thead.append(heads));

        $.each(data, function (rowIndex, r) {
            var row = $("<tr/>");
            $.each(r, function (colIndex, c) {
            	row.append($("<td/>").text(c));
            });
            tbody.append(row);
        });
        table.append(tbody);
        return container.append(table);
    }


}

Now we can run the code and try accessing the home page by visiting http://localhost:8083/:

UI Home Page

If we click on the "List Orders" button, it will list all the current orders in a tabular format:

Order List Table

Now, if we search for any order by product name in the search bar, it will return the result as below:

Order Search Result

All of the code for this section can be found on GitHub.

Extract from Kafka, Transform to/from Avro and Load into a Cassandra Database in Spring Cloud Stream using Reactive Cassandra

In the final use-case, we will try to build the same "Extract, Transform and Load" pipeline using Spring Cloud Stream. We will also use Spring Cloud Stream Schema to register and use the Schema Registry server in the functional style of coding. Then we will use Spring Cloud Stream for Kafka to load data from Kafka. We will also use Reactive Cassandra to support a reactive style of data ingestion into Cassandra. Finally, we will use Spring Reactive Web, where we will make use of Flux or Mono to define our REST endpoints and their logic.

So let's start our implementation and configure our pom.xml with all the required dependencies:

	<properties>
		<java.version>1.8</java.version>
		<spring-cloud.version>2021.0.4</spring-cloud.version>
		<kafka-avro-serializer.version>5.3.0</kafka-avro-serializer.version>
		<avro.version>1.11.1</avro.version>
	</properties>
	<repositories>
		<repository>
			<id>confluent</id>
			<url>https://packages.confluent.io/maven/</url>
		</repository>
	</repositories>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-cassandra-reactive</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-webflux</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream-binder-kafka</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream-schema</artifactId>
			<version>2.2.1.RELEASE</version>
		</dependency>
		<dependency>
			<groupId>io.confluent</groupId>
			<artifactId>kafka-avro-serializer</artifactId>
			<version>${kafka-avro-serializer.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro-compiler</artifactId>
			<version>${avro.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro-maven-plugin</artifactId>
			<version>${avro.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
			<version>${avro.version}</version>
			<scope>compile</scope>
		</dependency>
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<optional>true</optional>
		</dependency>
		<dependency>
			<groupId>org.yaml</groupId>
			<artifactId>snakeyaml</artifactId>
			<version>1.31</version>
		</dependency>
		<dependency>
			<groupId>com.github.javafaker</groupId>
			<artifactId>javafaker</artifactId>
			<version>1.0.2</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>io.projectreactor</groupId>
			<artifactId>reactor-test</artifactId>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream</artifactId>
			<scope>test</scope>
			<classifier>test-binder</classifier>
			<type>test-jar</type>
		</dependency>
	</dependencies>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.cloud</groupId>
				<artifactId>spring-cloud-dependencies</artifactId>
				<version>${spring-cloud.version}</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
				<configuration>
					<excludes>
						<exclude>
							<groupId>org.projectlombok</groupId>
							<artifactId>lombok</artifactId>
						</exclude>
					</excludes>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-resources-plugin</artifactId>
				<version>3.1.0</version>
			</plugin>
			<plugin>
				<groupId>org.codehaus.mojo</groupId>
				<artifactId>build-helper-maven-plugin</artifactId>
				<version>3.1.0</version>
				<executions>
					<execution>
						<phase>generate-sources</phase>
						<goals>
							<goal>add-source</goal>
						</goals>
						<configuration>
							<sources>
								<source>${project.build.directory}/generated-sources/avro</source>
							</sources>
						</configuration>
					</execution>
				</executions>
			</plugin>
			<plugin>
				<groupId>org.apache.avro</groupId>
				<artifactId>avro-maven-plugin</artifactId>
				<version>${avro.version}</version>
				<executions>
					<execution>
						<id>schemas</id>
						<phase>generate-sources</phase>
						<goals>
							<goal>schema</goal>
							<goal>protocol</goal>
							<goal>idl-protocol</goal>
						</goals>
						<configuration>
							<sourceDirectory>${project.basedir}/src/main/resources/</sourceDirectory>
							<outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
						</configuration>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

As we can see, we have also defined the Avro dependencies and the repository from which those libraries will be downloaded. Finally, we have defined the build configuration that will generate the schema classes in the given package or location.

In this example, we will play around with the same Customer data as part of the e-commerce application that we defined above. Just as above, we will also have one more schema that holds a composite key for Customer so that we can publish unique records with this key into the Kafka topic. Let's define our schema files:

{
  "type": "record",
  "name": "Customer",
  "namespace": "com.stackabuse.kafka_spring_cloud_reactive_cassandra.schema",
  "fields": [
    {
      "name": "id",
      "type": "int",
      "doc": "Id of the customer"
    },
    {
      "name": "firstName",
      "type": "string",
      "doc": "First Name of the customer"
    },
    {
      "name": "lastName",
      "type": "string",
      "doc": "Last Name of the customer"
    },
    {
      "name": "department",
      "type": "string",
      "default": "IT ",
      "doc": "Department of the customer"
    },
    {
      "name": "designation",
      "type": "string",
      "default": "Software Engineer",
      "doc": "Designation of the customer"
    }
  ]
}
{
  "type": "record",
  "name": "CustomerKey",
  "namespace": "com.stackabuse.kafka_spring_cloud_reactive_cassandra.schema",
  "fields": [
    {
      "name": "id",
      "type": "int",
      "doc": "Id of the customer"
    },
    {
      "name": "departmentName",
      "type": "string",
      "doc": "Department of the customer"
    }
  ]
}

Now, we will run the Maven command to generate the schema classes:

mvn generate-sources

This will create the POJOs under the schema package.

Next, we will define our Cassandra entity for Customer so that we can transform the Avro data into table content:

@Table
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class Customer {

    @PrimaryKeyColumn(
            ordinal = 2,
            type = PrimaryKeyType.CLUSTERED,
            ordering = Ordering.DESCENDING)
    private int id;

    @Column(value = "first_name")
    private String firstName;

    @Column(value = "last_name")
    private String lastName;

    @PrimaryKeyColumn(
            ordinal = 0,
            type = PrimaryKeyType.PARTITIONED)
    private String department;
    private String designation;
}

We will partition our data store using the department field of our table. We are also using the @Table annotation so that the bean is directly mapped to a Cassandra table. Unlike @Id in JPA, Cassandra uses @PrimaryKeyColumn; the resulting key layout is sketched below.
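As before, this is only an illustrative sketch of the key layout the entity maps to, inferred from the annotations rather than taken from generated DDL:

-- Illustrative sketch of the key layout the Customer entity maps to
CREATE TABLE IF NOT EXISTS stackabuse.customer (
    department text,
    id int,
    first_name text,
    last_name text,
    designation text,
    PRIMARY KEY ((department), id)
) WITH CLUSTERING ORDER BY (id DESC);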

Now, we will define the repository interface, this time extending ReactiveCassandraRepository:

public interface CustomerRepository extends ReactiveCassandraRepository<Customer, Integer> {

    @AllowFiltering
    Flux<Customer> findByDepartment(String department);
}
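Because the repository is reactive, every method now returns a Mono or Flux and nothing executes until it is subscribed to. As a quick, hedged illustration (this helper class is ours, purely for demonstration, and not part of the project code):

// Hypothetical helper, only to illustrate consuming the reactive repository
@Slf4j
@Component
public class CustomerLogger {

    @Autowired
    private CustomerRepository customerRepository;

    public void logItCustomers() {
        customerRepository.findByDepartment("IT")       // returns a Flux<Customer>; nothing runs yet
                .doOnNext(c -> log.info("Found customer: {}", c))
                .subscribe();                           // subscribing triggers the query
    }
}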

Now, we will define the configuration to load the Confluent Schema Registry endpoint into our code:

@Configuration
public class SchemaRegistryConfig {

    @Bean
    public SchemaRegistryClient schemaRegistryClient(
            @Value("${spring.cloud.stream.kafka.binder.producer-properties.schema.registry.url}")
                    String endPoint) {
        ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();
        client.setEndpoint(endPoint);
        return client;
    }
}

Next, we will define a function to produce our messages into the Kafka topic. It will generate some fake records using the Avro schema:

@Slf4j
@Service
public class ProducerService {

    @Autowired
    private StreamBridge streamBridge;

    @Scheduled(cron = "*/10 * * * * *")
    public void producer() {

        for (int i = 0; i < 10; i++) {
            String department = Faker.instance().commerce().department();
            Customer customer = Customer.newBuilder()
                    .setId(i)
                    .setFirstName(Faker.instance().friends().character())
                    .setLastName(Faker.instance().gameOfThrones().character())
                    .setDepartment(department)
                    .setDesignation(Faker.instance().company().profession())
                    .build();

            CustomerKey customerKey = CustomerKey.newBuilder()
                    .setId(i)
                    .setDepartmentName(department)
                    .build();
            log.info("Producing message: {}", customer);
            streamBridge.send("producer-out-0", MessageBuilder.withPayload(customer)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, customerKey)
                    .build());
        }
    }
}

We are using StreamBridge to publish the messages into the producer-out-0 channel. This will publish Customer records into our topic in Avro format, which we can consume and process for our downstream application. Next, we can define a simple consumer service that will consume these messages from Kafka, deserialize the Avro records into our entity type and finally push them into Cassandra:

@Slf4j
@Service
public class ConsumerService {

    @Autowired
    private CustomerRepository customerRepository;

    @Bean
    public Consumer<Customer> consumer() {
        return customer -> {
            log.info("Consumed customer details: {}", customer);
            com.stackabuse.kafka_spring_cloud_reactive_cassandra.entity.Customer customerEntity =
                    com.stackabuse.kafka_spring_cloud_reactive_cassandra.entity.Customer
                            .builder()
                            .id(customer.getId())
                            .firstName(customer.getFirstName().toString())
                            .lastName(customer.getLastName().toString())
                            .department(customer.getDepartment().toString())
                            .designation(customer.getDesignation().toString())
                            .build();
            Mono<com.stackabuse.kafka_spring_cloud_reactive_cassandra.entity.Customer> savedCustomer =
                    customerRepository.save(customerEntity);
            savedCustomer
                    .log()
                    .subscribe();
        };
    }
}

Now, we will define a controller to load this data from Cassandra and expose it as a REST API. So we will create a controller and define three methods: getAllCustomers(), getCustomerById() and saveCustomer(). We will use Mono and Flux to return data from the REST endpoints:

@RestController
@RequestMapping("/api/customers")
public class CustomerController {

    @Autowired
    CustomerRepository customerRepository;

    @GetMapping
    public Flux<Customer> getAllCustomers(@RequestParam(required = false) String department) {
        if (Objects.isNull(department)) {
            return customerRepository.findAll();
        } else {
            return customerRepository.findByDepartment(department);
        }
    }

    @GetMapping("/{id}")
    public Mono<Customer> getCustomerById(@PathVariable("id") int id) {
        return customerRepository.findById(id);
    }

    @PostMapping
    public Mono<Customer> saveCustomer(@RequestBody Customer customer) {
        return customerRepository.save(customer);
    }
}

Finally, we will define the properties for Kafka, the Confluent Schema Registry and Cassandra as part of application.yaml:

spring:
  main:
    allow-bean-definition-overriding: true
  data:
    cassandra:
      keyspace-name: stackabuse
      contact-points:
        - localhost
      local-datacenter: datacenter1
      port: 9042
      schema-action: CREATE_IF_NOT_EXISTS
      username: admin
      password: password
      connection:
        connect-timeout: 60000ms
        read-timeout: 60000ms
      pool:
        pool-timeout: 60000ms
  cloud:
    function:
      definition: producer;consumer;
    stream:
      bindings:
        consumer-in-0:
          consumer:
            use-native-decoding: true # Enables using the custom deserializer
            max-attempts: 3
            back-off-initial-interval: 100
          destination: cassandra-customer-details
          content-type: application/*+avro
          group: group-customer
          concurrency: 3
        producer-out-0:
          destination: cassandra-customer-details
          content-type: application/*+avro
          producer:
            useNativeEncoding: true # Enables using the custom serializer
      kafka:
        binder:
          brokers: localhost:9092
          autoCreateTopics: true
          autoAddPartitions: true
          replication-factor: 1
          configuration:
            processing.guarantee: exactly_once
            isolation.level: read_committed
            commit.interval.ms: 1000
          producer-properties:
            key.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            schema.registry.url: http://localhost:8081
            acks: all
            enable.idempotence: true
          consumer-properties:
            key.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
            specific.avro.reader: true
            allow.auto.create.topics: true

This marks our “Extract, Transform and Load” pipeline as complete. Now we can build our application and run it. It will first publish some random records on startup, then automatically ingest the data from Kafka, transform it into the defined Cassandra table entities and push it into the database, all using Spring Cloud Stream. We can also hit the APIs to fetch the data. For example, we can filter Customer records by department by calling the REST API with cURL:

curl -i -X GET \
 'http://localhost:8083/api/customers?department=IT'
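We can also fetch a single customer through the getCustomerById() endpoint (the id value here is illustrative):

# Illustrative id; the producer generates ids 0-9
curl -i -X GET 'http://localhost:8083/api/customers/1'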

However, as we did in our earlier use-case, we can go one step further and build a simple static UI page to list all customers and search them from a huge list at lightning speed with the help of the above API. We will use Thymeleaf, jQuery tools and Bootstrap components for our UI implementation.

So let's add some of the libraries that we need to our existing pom.xml:

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-thymeleaf</artifactId>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>jquery</artifactId>
			<version>3.5.0</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>jQuery-Autocomplete</artifactId>
			<version>1.4.10</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>bootstrap</artifactId>
			<version>4.4.1</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>jquery-ui</artifactId>
			<version>1.12.1</version>
		</dependency>
		<dependency>
			<groupId>org.webjars</groupId>
			<artifactId>bootstrap-table</artifactId>
			<version>1.15.5</version>
		</dependency>

Then we will define a controller to route the UI endpoints to a static HTML page:

@Controller
public class UIController {

    @Autowired
    CustomerRepository customerRepository;

    @GetMapping("/")
    public String displayHome(Model model) {
        return "main/home";
    }

    @GetMapping("/customers/all")
    public String displayCustomers(Model model) {
        model.addAttribute("customers", customerRepository.findAll().collectList().share().block());
        return "customers/list-customers";
    }
}

Now we need to define a Home page that gives a brief intro about the website, and then we have to add a menu item to the navigation bar that routes to the customer listing page.

So let's start with the HTML implementation. We need to define all our HTML pages under the src/main/resources/templates folder. First we will implement the Home page as part of home.html:

<!DOCTYPE html>

<html xmlns:th="http://www.thymeleaf.org">

<head th:replace="layouts :: header"></head>

<body class="bg-dark mb-4">

<nav th:replace="layouts :: navbar"></nav>

<div th:style="'color:white'">
    <h3 style="text-align:center">Welcome to E-commerce App !!</h3>
    <p style="text-align:center">Please click on the List Customers button to see the entire list!</p>
</div>

</body>

</html>

Next we need to define a navbar that lists the menu items as part of layouts.html, which is then pulled in with th:replace in the home.html above:

<!DOCTYPE html>

<html xmlns:th="http://www.thymeleaf.org">

<head th:fragment="header">
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <link rel="icon" type="image/icon" th:href="@{/img/favicon.ico}"/>
    <title>E-commerce App</title>

    <!-- Bootstrap CSS -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css"
          integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous">
    <link rel="stylesheet" type="text/css" href="webjars/bootstrap/4.4.1/css/bootstrap.css"/>

    <link rel="stylesheet" href="webjars/bootstrap-table/1.15.5/bootstrap-table.css">

    <link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">

    <link rel="stylesheet" href="webjars/jquery-ui/1.12.1/jquery-ui.css">
    <style>
        .table-two-color {
            color: #e2e2e2;
            background-color: #2b4a47;
        }
    </style>
</head>

<nav th:fragment="navbar" class="navbar navbar-expand-lg navbar-dark bg-primary">
    <a class="navbar-brand" href="#">E-commerce App</a>
    <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
        <span class="navbar-toggler-icon"></span>
    </button>

    <div class="collapse navbar-collapse" id="navbarSupportedContent">
        <ul class="navbar-nav mr-auto nav-pills justify-content-center">
            <li class="nav-item">
                <a class="nav-link active" th:href="@{/}">Home <span class="sr-only">(current)</span></a>
            </li>
            <li class="nav-item">
                <a class="nav-link active" th:href="@{/customers/all}">List Customers</a>
            </li>
        </ul>
    </div>
</nav>

</html>

Next we will implement the Customer listing page, with a table plus a search bar and search button, as templates/customers/list-customers.html:

<!DOCTYPE html>

<html xmlns:th="http://www.thymeleaf.org">

<head th:replace="layouts :: header"></head>

<body class="bg-dark mb-4">

<nav th:replace="layouts :: navbar"
     class="navbar navbar-expand-md navbar-dark bg-dark mb-4"></nav>


<div th:style="'color:white'">
    <h3 style="text-align: center">Customer Profile List</h3>
    <hr>
    <div class="starter-template">
        <form class="form-horizontal" id="search-form">
            <div class="form-row align-items-center">
                <div class="col-sm-10">
                    <input type="text" class="form-control" id="term"
                           placeholder="Search Customers by Department">
                </div>
                <div class="col-auto">
                    <button type="submit" id="btn-search"
                            class="btn btn-outline-success btn-sm mb-3">Search</button>
                </div>
            </div>
        </form>
    </div>
    <div id="feedback" class="bg-info">
        <table class="table table-striped table-dark">
            <thead class="thead-dark">
            <tr>
                <th>ID</th>
                <th>First Name</th>
                <th>Last Name</th>
                <th>Department</th>
                <th>Designation</th>
            </tr>
            </thead>
            <tbody th:style="'color:white'">
            <tr th:each="aCustomer : ${customers}">
                <td th:text="${aCustomer.id}"></td>
                <td th:text="${aCustomer.firstName}"></td>
                <td th:text="${aCustomer.lastName}"></td>
                <td th:text="${aCustomer.department}"></td>
                <td th:text="${aCustomer.designation}"></td>
            </tr>
            </tbody>
        </table>
    </div>
</div>

<script type="text/javascript" src="/webjars/jquery/3.5.0/jquery.min.js"></script>
<script type="text/javascript"
        src="/webjars/jQuery-Autocomplete/1.4.10/jquery.autocomplete.min.js"></script>
<script type="text/javascript" src="/js/main.js"></script>
</body>

</html>

Finally, we need the JavaScript that makes the AJAX call to load data from the API and replaces the content in the UI. We place it under src/main/resources/static/js/main.js:

$(document).ready(function () {

    $("#search-form").submit(function (event) {
        event.preventDefault();
        search_submit();
    });
});


function suggest(suggestion) {
    search_submit();
}


function search_submit() {
    var t0 = performance.now();
    // Build the query string for the GET request, e.g. department=IT
    var search = "department=" + $("#term").val();

    $("#btn-search").prop("disabled", true);

    console.log("Search term: " + $("#term").val());
    $.ajax({
        type: "GET",
        contentType: "application/json",
        url: "/api/customers",
        data: search,
        dataType: 'json',
        cache: false,
        timeout: 600000,
        success: function (data) {
            $('#feedback').empty();
            var table = [];
            console.log("Data returned: ", data);
            if (data.length > 0) {
                var total = data.length;
                var t1 = performance.now();
                var diff = ((t1 - t0) / 1000).toFixed(2);
                $('#feedback').append('Returned ' + total + ' records in ' + diff + ' seconds!');
                var items = data;
                for (var i = 0; i < items.length; i++) {
                    var item = items[i];
                    table.push([item.id, item.firstName, item.lastName, item.department, item.designation]);
                }
                makeTable($('#feedback'), table);
                $("#btn-search").prop("disabled", false);
            } else {
                $('#feedback').append('No Data Found');
            }
        },
        error: function (e) {
            $('#feedback').empty();
            var json = "<h4>Search Error Response</h4><pre>"
                + e.responseText + "</pre>";
            $('#feedback').html(json);

            console.log("ERROR : ", e);
            $("#btn-search").prop("disabled", false);

        }
    });

    // Renders the result rows into a Bootstrap-styled table inside the container
    function makeTable(container, data) {
        var table = $("<table/>").addClass('table table-striped table-dark');
        var headers = ["ID", "First Name", "Last Name", "Department", "Designation"];
        var thead = $("<thead/>").addClass('thead-dark');
        var heads = $("<tr/>");
        $.each(headers, function (colIndex, c) {
            heads.append($("<th/>").text(c));
        });
        var tbody = $("<tbody/>");
        table.append(thead.append(heads));

        $.each(data, function (rowIndex, r) {
            var row = $("<tr/>");
            $.each(r, function (colIndex, c) {
                row.append($("<td/>").text(c));
            });
            tbody.append(row);
        });
        table.append(tbody);
        return container.append(table);
    }


}
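
With the Kafka broker and Cassandra instance from the earlier steps still running, starting the web application should work the same way as before; assuming the project uses the Maven wrapper, something like the following is enough (adjust if you build with plain Maven or Gradle):

./mvnw spring-boot:run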

Once the application is up, we can access the home page at http://localhost:8080/:

UI Home Page

If we click on the "List Customers" button, it will list all the existing customers in a tabular format:

Customer List Tab

Now, if we search for any customer by department in the search bar, it will return results like the ones below:

Customer Search Result

With this, we have completed our Guided Project. We looked into various data pipeline strategies using Spring Boot and Kafka. You can find the complete code for this section on GitHub.
