Bangla Handwritten Character Recognition Using CNN

  • Added 14. 08. 2020
  • I have developed a Python library named imgclassifier, built on top of PyTorch, which requires only one line of code for classification. Follow the attached notebook, imgclassifier_character_recognition, for Bangla handwritten character recognition using imgclassifier.
    *imgclassifier library: github.com/mehedihasanbijoy/i...
    *imgclassifier_character_recognition: colab.research.google.com/dri...
    *BHCR repository: github.com/mehedihasanbijoy/B...
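For context, here is a minimal sketch of the kind of CNN such a wrapper trains under the hood. This is illustrative only, not the imgclassifier library's actual API or architecture: the layer sizes and the 32×32 grayscale input are assumptions, and the 50-class output matches the basic Bangla character set discussed in the comments below.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a small CNN of the kind a one-line image
# classification wrapper could train. Layer sizes and input resolution
# are assumptions, not the library's actual architecture.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=50):  # 50 basic Bangla characters
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.zeros(1, 1, 32, 32))  # one grayscale 32x32 image
print(logits.shape)  # torch.Size([1, 50])
```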

Comments • 34

  • @muhammadasyrafi9550
    @muhammadasyrafi9550 2 years ago

    Hi! Thank you for the video. This is very helpful!

  • @amitpatra8605
    @amitpatra8605 2 years ago

    Namaskar sir, where did you get the labeled input training dataset from? Kaggle?
    And how should I proceed for conjunct characters (juktakkhor)? It would be great if you could explain briefly.

  • @mithunchandrasaha403
    @mithunchandrasaha403 3 years ago

    Bhaiya, nice tutorial, but a bit more explanation would have been better.

  • @swarnalibanerjee7354
    @swarnalibanerjee7354 2 years ago

    Hi, I'm getting "AttributeError: module 'keras.preprocessing.image' has no attribute 'load_img'" error in the draw_n_guess_the_character() part, how can I resolve it?
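A likely fix for the error above: from Keras 2.9 (the version bundled with TensorFlow 2.9), `load_img` and `img_to_array` moved from `keras.preprocessing.image` to `keras.utils`, so the old attribute no longer exists. Assuming the notebook uses the TensorFlow-bundled Keras, a version-tolerant import looks like this:

```python
# Keras 2.9+ moved load_img/img_to_array out of keras.preprocessing.image;
# fall back to the old location on older installs.
try:
    from tensorflow.keras.utils import load_img, img_to_array
except ImportError:
    from tensorflow.keras.preprocessing.image import load_img, img_to_array
```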

  • @manishjha458
    @manishjha458 3 years ago

    How can I post an image as input to check it against the model?

  • @kanisfatemashanta9297
    @kanisfatemashanta9297 3 years ago

    Have you uploaded the code in github?

  • @sinziMiNishop
    @sinziMiNishop 3 years ago

    Hi bhaiya, your video is amazing. I was wondering: if we work with compound letters, will this code still work? If you share an email address or other contact information, it will be easier for me, because I am interested in the topic "Real-Time Bangla Handwritten Characters and Digits Recognition using Adopted Convolutional Neural Network" and I have lots of questions 😅

    • @mehedihasanbijoy6609
      @mehedihasanbijoy6609  3 years ago

      It may require a few modifications to the code to connect your dataset to the model. As for the model itself, you may try adding hidden layers, different filter sizes, more filters (and hence different numbers of filter channels), different stride values, different activation functions, and so on. You can ask me your remaining questions at mhb6434@gmail.com
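The tweaks listed in that reply can be sketched as a configurable convolution block, so filter count, filter size, stride, and activation can each be varied independently. All hyperparameter values below are illustrative, not the repository's actual settings:

```python
import torch
import torch.nn as nn

# Configurable conv block: filter count (out_ch), filter size, stride,
# and activation are each tunable. Values below are illustrative.
def conv_block(in_ch, out_ch, kernel_size=3, stride=1, act=nn.ReLU):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size,
                  stride=stride, padding=kernel_size // 2),
        act(),
    )

stack = nn.Sequential(
    conv_block(1, 32),                           # baseline: 32 3x3 filters
    conv_block(32, 64, kernel_size=5),           # larger filters
    conv_block(64, 128, stride=2, act=nn.GELU),  # more filters, stride 2, different activation
)
out = stack(torch.zeros(1, 1, 32, 32))
print(out.shape)  # torch.Size([1, 128, 16, 16])
```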

  • @nohelnath7840
    @nohelnath7840 9 months ago

    What is the dataset? Is it handwritten or printed?

  • @rupaadhikary517
    @rupaadhikary517 1 year ago

    What if I want to import my own dataset? How do I prepare it?

  • @lalitdev1758
    @lalitdev1758 3 years ago

    Which dataset is suitable for predicting English handwritten text with this model?

  • @gauravranchi
    @gauravranchi 3 years ago

    I was trying to find a software that translates doctors' prescriptions... can you help me develop it?

  • @shamshad126prottoy6
    @shamshad126prottoy6 1 year ago

    Do I need a GPU for training this model? And is Google Colab enough for training on a 50-class dataset? It would be good to see your reply. Thank you.

    • @mehedihasanbijoy6609
      @mehedihasanbijoy6609  1 year ago +1

      Yes, Colab is enough for training a model on large-scale datasets, but you need to be patient. For instance, it may take hours to complete each training epoch, so I would suggest saving the model after each epoch.
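The per-epoch saving suggested there can be sketched as follows. The tiny model and the empty loop body are placeholders for the real training code; on Colab you would also copy the checkpoint files to Google Drive so they survive a disconnected runtime:

```python
import os
import tempfile
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                          # placeholder for the real CNN
optimizer = torch.optim.Adam(model.parameters())
ckpt_dir = tempfile.mkdtemp()                    # placeholder save location

for epoch in range(3):                           # placeholder epoch count
    # ... run one epoch of training here ...
    torch.save({"epoch": epoch,
                "model_state": model.state_dict(),
                "optimizer_state": optimizer.state_dict()},
               os.path.join(ckpt_dir, f"epoch_{epoch}.pth"))

print(sorted(os.listdir(ckpt_dir)))  # ['epoch_0.pth', 'epoch_1.pth', 'epoch_2.pth']
```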

    • @shamshad126prottoy6
      @shamshad126prottoy6 1 year ago

      @@mehedihasanbijoy6609 ❤️

  • @rupaadhikary517
    @rupaadhikary517 1 year ago

    Where can I get the dataset, sir? What is the name of the dataset?

    • @mehedihasanbijoy6609
      @mehedihasanbijoy6609  1 year ago

      drive.google.com/file/d/1dcs1a7Yt9onWlbxrNBOkT7HNC6VWapS2/view?usp=share_link

  •  3 years ago

    Can you tell me where the data was taken from? I mean, what is the source link of your dataset?

    • @mehedihasanbijoy6609
      @mehedihasanbijoy6609  3 years ago +2

      The name of the dataset is CMATERdb. I collected it from the internet.

    •  3 years ago

      @@mehedihasanbijoy6609 Thanks a lot 😊

  • @sadiaafrin7143
    @sadiaafrin7143 2 years ago

    What is the name of the dataset? Is it CMATERdb?

  • @prathapt1730
    @prathapt1730 3 years ago

    Bro, how do I do this for the Kannada language? I have the dataset as PNG files.

    • @mehedihasanbijoy6609
      @mehedihasanbijoy6609  3 years ago

      0. Clone the repository, look at the format of the dataset, and convert your dataset into that format.
      1. Make a few modifications to the code (particularly the dataloader) to match your dataset's format.
      You can do it either way.

    • @prathapt1730
      @prathapt1730 2 years ago

      Thank you bro.

  • @tamimahasan336
    @tamimahasan336 2 years ago

    What is the solution?
    classifier.fit_generator(training_set, steps_per_epoch = 12000, epochs = 10,
    Epoch 1/10
    375/12000 [..............................] - ETA: 25:23 - loss: 2.7628 - accuracy: 0.2730WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 120000 batches). You may need to use the repeat() function when building your dataset.
    WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 3000 batches). You may need to use the repeat() function when building your dataset.
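The warning above means the generator ran out of batches: `steps_per_epoch` was set to the number of images (12000) rather than the number of batches per epoch. Assuming the notebook uses the Keras `flow_from_directory` default `batch_size` of 32, the generator yields exactly 375 batches per epoch, which is precisely where the log stops (375/12000). The fix is to derive `steps_per_epoch` from the sample count:

```python
# steps_per_epoch must be the number of *batches* per epoch, not images.
num_images = 12_000
batch_size = 32                 # Keras flow_from_directory default; use yours

steps_per_epoch = num_images // batch_size
print(steps_per_epoch)  # 375 -- matches the step where training stopped

# then: classifier.fit(training_set, steps_per_epoch=steps_per_epoch, epochs=10)
```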
