YaBeSH Engineering and Technology Library



    Enhancing Image Segmentation Model in Computing Void Percentage With Mask RCNN

    Source: Journal of Electronic Packaging, 2025, Volume 147, Issue 3, Page 31001-1
    Authors: Ling, Calvin; Abas, Aizat; Kai, Chew Cheng; Azahari, Muhammad Taufik
    DOI: 10.1115/1.4067897
    Publisher: The American Society of Mechanical Engineers (ASME)
    Abstract: Automating quality control is an ongoing effort, especially in the manufacture of large flip-chips, and one such approach employs convolutional neural networks (CNNs). This work studied the optimum setup of a Mask region-based convolutional neural network (Mask RCNN) for accurately analyzing void formation in the underfill of asymmetrical, large ball grid array (BGA) flip-chips and for computing void size relative to the underfill region. Experimental through-scan acoustic microscope (TSAM) images of BGA underfill are collected, preprocessed, and used to train the Mask RCNN model while varying its backbone architecture and hyperparameters. The size of each detected region is computed with a histogram. Otsu’s thresholding method and the model’s performance in generating results, together with its accuracy relative to real-scale images, are evaluated for each customization of the CNN and thresholding model. The Mask RCNN-ResNet101-FPN-Custom model with Otsu’s thresholding yields the best performance, capturing voids in TSAM images with up to 96.40% accuracy and computing the void percentage relative to the underfill region with a low percentage error of 1.70%. The study provides insight into further improving the capture and computation of void presence and size, allowing manufacturers to leverage an optimized CNN architecture and image-segmentation thresholding algorithm to expedite automated quality checking in a manufacturing process, reducing lead cost.
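
    The abstract describes a two-stage pipeline: a Mask RCNN model (here with a ResNet101-FPN backbone) segments the TSAM image, and Otsu's thresholding is then used to quantify void area relative to the underfill region. Below is a minimal sketch of that second, void-percentage step only, assuming OpenCV and NumPy; the function name, the synthetic demo image, and the assumption that the underfill mask comes from an upstream Mask RCNN prediction (with voids appearing as the brighter class) are illustrative choices, not details taken from the paper.

    import cv2
    import numpy as np

    def void_percentage(tsam_path, underfill_mask):
        """Return void area as a percentage of the underfill area (0-100)."""
        gray = cv2.imread(tsam_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(tsam_path)
        # Otsu's method picks the global threshold that best separates the two
        # intensity classes in the image histogram.
        _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # Keep only pixels inside the underfill region predicted upstream
        # (assumed here to come from a Mask RCNN segmentation).
        voids = cv2.bitwise_and(otsu, otsu, mask=underfill_mask.astype(np.uint8))
        underfill_px = int(np.count_nonzero(underfill_mask))
        void_px = int(np.count_nonzero(voids))
        return 100.0 * void_px / underfill_px if underfill_px else 0.0

    if __name__ == "__main__":
        # Synthetic stand-in for a TSAM scan: dark underfill with one bright "void".
        demo = np.full((480, 640), 40, dtype=np.uint8)
        cv2.circle(demo, (320, 240), 50, 220, thickness=-1)
        cv2.imwrite("tsam_demo.png", demo)                 # hypothetical file name
        mask = np.ones_like(demo)                          # placeholder underfill mask
        print(f"void percentage: {void_percentage('tsam_demo.png', mask):.2f}%")

    The figures reported in the abstract (96.40% void-capture accuracy, 1.70% void-percentage error) come from the paper's trained Mask RCNN-ResNet101-FPN-Custom model; the sketch above only illustrates the thresholding arithmetic applied to whatever masks such a model would produce.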


    URI: http://yetl.yabesh.ir/yetl1/handle/yetl/4308110
    Collections
    • Journal of Electronic Packaging

    Full item record

    contributor author: Ling, Calvin
    contributor author: Abas, Aizat
    contributor author: Kai, Chew Cheng
    contributor author: Azahari, Muhammad Taufik
    date accessioned: 2025-08-20T09:20:14Z
    date available: 2025-08-20T09:20:14Z
    date copyright: 3/14/2025 12:00:00 AM
    date issued: 2025
    identifier issn: 1043-7398
    identifier other: ep_147_03_031001.pdf
    identifier uri: http://yetl.yabesh.ir/yetl1/handle/yetl/4308110
    publisher: The American Society of Mechanical Engineers (ASME)
    title: Enhancing Image Segmentation Model in Computing Void Percentage With Mask RCNN
    type: Journal Paper
    journal volume: 147
    journal issue: 3
    journal title: Journal of Electronic Packaging
    identifier doi: 10.1115/1.4067897
    journal firstpage: 31001-1
    journal lastpage: 31001-7
    page: 7
    tree: Journal of Electronic Packaging; 2025; Volume 147; Issue 3
    contenttype: Fulltext