potential of AI-driven quality assurance can be realized.37 This balanced approach ensures that AI applications in AM are both effective and reliable, paving the way for more advanced and integrated manufacturing solutions. The mitigation strategies for each of these cases are explained in detail in Table 5.

7. Managerial implications
Past studies offer several significant managerial implications for organizations involved in AM. These implications encompass strategic decisions, operational enhancements, and resource allocations aimed at leveraging AI technologies to improve quality assurance in AM processes.21,22 The key managerial implications derived from these studies are detailed below:
(i) Strategic integration of AI technologies: Organizations should recognize the strategic importance of AI-driven quality assurance in AM and incorporate it into their long-term technology roadmap.12 Senior management should allocate resources for research and development initiatives focused on AI-driven quality assurance, ensuring alignment with organizational goals and priorities.
(ii) Investment in talent and expertise: Organizations need to invest in acquiring and developing talent with expertise in AI, ML, computer vision, and AM.21,31,34 Training programs, workshops, and knowledge-sharing sessions should be conducted to upskill existing workforce members and foster a culture of innovation and continuous learning.
(iii) Collaboration and partnerships: Collaborative partnerships with research institutions, universities, and technology providers can accelerate the development and adoption of AI-driven quality assurance solutions.2,3,7,9 Organizations should actively engage in industry consortia, standards development organizations, and regulatory bodies to shape guidelines and best practices for AI-enabled AM.
(iv) Integration with existing workflows: Seamless integration of AI-driven quality assurance tools into
Table 5. Mitigation strategies

Drawback: Data variability
  Factor: Inconsistent data quality
    • Data augmentation: Applying data augmentation techniques, such as rotation, scaling, and noise addition, can help create a more robust training dataset that simulates various conditions.
    • Standardization protocols: Establishing standardized protocols for data collection can reduce variability. Ensuring consistent camera settings, maintaining controlled environmental conditions, and regular machine calibration can improve data quality.
  Factor: Domain adaptation
    • Transfer learning: Using pre-trained models and fine-tuning them with data from the target domain can help adapt models to new environments.
    • Domain adaptation techniques: Employing domain adaptation methods, such as domain adversarial training, can enhance model robustness to variations across different domains.

Drawback: Model interpretability
  Factor: Lack of transparency
    • XAI: Incorporating XAI techniques, such as saliency maps, Layer-wise Relevance Propagation, or SHapley Additive exPlanations, can provide insights into which parts of the input data contributed to the model's predictions.
    • Model simplification: Using simpler models or decision trees, where feasible, can enhance interpretability without significantly compromising performance.
  Factor: Diagnostic use
    • Hybrid models: Combining ML models with traditional statistical methods or rule-based systems can improve both accuracy and interpretability.
    • Feature importance analysis: Analyzing feature importance can help identify which process parameters most influence defect formation, guiding process optimization efforts.

Drawback: Implementation complexities
  Factor: Integration with existing systems
    • Modular architecture: Designing AI systems with modularity in mind can facilitate easier integration. Using APIs and standardized communication protocols can enhance compatibility.
    • Collaborative development: Working closely with machine manufacturers and software providers can help create more integrated solutions.
  Factor: Scalability and real-time processing
    • Edge computing: Implementing edge computing solutions can offload processing to local devices, reducing latency and dependency on central servers.
    • Optimized algorithms: Using optimized ML algorithms and hardware accelerators, such as GPUs or TPUs, can improve processing speed and scalability.
  Factor: Maintenance and updates
    • Automated retraining pipelines: Setting up automated pipelines for data collection, model training, and deployment can streamline maintenance.
    • Continuous monitoring: Implementing continuous monitoring systems to track model performance and trigger retraining when necessary can ensure sustained model accuracy.

Abbreviations: AI: Artificial intelligence; GPU: Graphics processing unit; ML: Machine learning; TPU: Tensor processing unit; XAI: Explainable artificial intelligence.
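To make the data-augmentation strategy listed in Table 5 concrete, a minimal Python sketch is shown below. It assumes PyTorch and torchvision are available and that layer-wise inspection images (e.g., melt-pool or powder-bed frames) are loaded as PIL images; the variable raw_image is a hypothetical placeholder.

# Data-augmentation sketch for AM inspection images (illustrative only;
# assumes torchvision is installed and images are available as PIL images).
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                 # small in-plane rotations
    transforms.RandomAffine(degrees=0, scale=(0.9, 1.1)),  # mild rescaling
    transforms.ToTensor(),                                  # PIL image -> tensor in [0, 1]
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)),  # additive noise
])

# augmented = augment(raw_image)  # raw_image: hypothetical PIL image of one build layer

Applying such a pipeline during training exposes the model to simulated variations in orientation, scale, and sensor noise without collecting additional builds.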
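The transfer-learning entry in Table 5 can be sketched in a similarly minimal way: a backbone pre-trained on a large generic image dataset is frozen, and only a new classification head is fine-tuned on target-domain defect images. This is a generic illustration rather than the pipeline of any cited study; the binary defect/no-defect head is an assumption.

# Transfer-learning sketch (illustrative; assumes torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet-pre-trained backbone
for param in model.parameters():
    param.requires_grad = False                                   # freeze pre-trained weights
model.fc = nn.Linear(model.fc.in_features, 2)                     # new head: defect / no defect

# During fine-tuning, only the new head's parameters would be optimized, e.g.:
# optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)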
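For the XAI strategies in Table 5, a gradient-based saliency map is among the simplest options: the gradient of the predicted class score with respect to the input pixels highlights the regions that drove the prediction. In the sketch below, model and x are placeholders for a trained PyTorch classifier and a single image tensor; this is one illustrative technique, not a substitute for Layer-wise Relevance Propagation or SHapley Additive exPlanations.

# Gradient-based saliency sketch (illustrative; `model` and `x` are placeholders
# for a trained classifier and an image tensor of shape [1, C, H, W]).
import torch

def saliency_map(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    model.eval()
    x = x.clone().requires_grad_(True)       # track gradients with respect to input pixels
    scores = model(x)                        # forward pass: class scores
    top_class = scores.argmax(dim=1).item()  # predicted class index
    scores[0, top_class].backward()          # backpropagate the top score to the input
    return x.grad.abs().max(dim=1)[0]        # per-pixel saliency: max |gradient| over channels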
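The feature-importance strategy can be illustrated with a tree ensemble trained on process parameters; the parameter names and synthetic data below are placeholders and do not reflect results from the reviewed studies.

# Feature-importance sketch: rank process parameters by influence on a defect label.
# Parameter names and randomly generated data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["laser_power", "scan_speed", "hatch_spacing", "layer_thickness"]
X = rng.normal(size=(500, len(features)))                       # synthetic parameter records
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")                          # larger value = more influence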
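Finally, the continuous-monitoring strategy can be reduced to a simple rule: track a rolling performance metric on recent production data and flag retraining once it falls below a threshold. The window size and threshold in this sketch are arbitrary placeholders rather than recommended values.

# Continuous-monitoring sketch: flag retraining when rolling accuracy degrades.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 200, threshold: float = 0.90):
        self.recent = deque(maxlen=window)   # most recent prediction outcomes (1 = correct)
        self.threshold = threshold

    def update(self, prediction: int, ground_truth: int) -> bool:
        self.recent.append(int(prediction == ground_truth))
        if len(self.recent) < self.recent.maxlen:
            return False                     # wait until the window is full
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.threshold     # True = retraining should be triggered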

