-
Research Article
Features, Models, and Applications of Deep Learning in Music Composition
Issue:
Volume 9, Issue 3, September 2025
Pages:
155-162
Received:
20 April 2025
Accepted:
12 June 2025
Published:
15 July 2025
Abstract: Due to the swift advancement of artificial intelligence and deep learning technologies, computers are assuming an increasingly prominent role in music composition, fueling innovations in music generation techniques. Deep learning models such as RNNs, LSTMs, Transformers, and diffusion models have demonstrated outstanding performance in music generation, effectively handling temporal relationships, long-term dependencies, and complex structural issues in music. Transformers, with their self-attention mechanism, excel at capturing long-term dependencies and generating intricate melodies, while diffusion models offer significant advantages in audio quality, producing higher-fidelity and more natural audio. Despite these breakthroughs in generation quality and performance, challenges remain in efficiency, originality, and structural coherence. This research comprehensively examines the application of prevalent deep learning frameworks in music generation, emphasizing their respective advantages and limitations in managing temporal correlations, long-range dependencies, and intricate structures, and aims to provide insights for addressing current challenges in efficiency and controllability. Additionally, the research explores potential applications of these technologies in fields such as music education, therapy, and entertainment, offering theoretical and practical guidance for future music creation. Furthermore, this study highlights the importance of addressing the limitations of current models, such as the computational intensity of Transformers and the slow generation speed of diffusion models, to pave the way for more efficient and creative music generation systems. Future work may focus on combining the strengths of different models to overcome these challenges and to foster greater originality and diversity in AI-generated music. In doing so, we aim to push the boundaries of music creation, leveraging AI to inspire new forms of artistic expression and to enhance the creative process for musicians and composers alike.
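As a minimal illustration of the self-attention mechanism the abstract credits for capturing long-term dependencies, the sketch below computes scaled dot-product attention over toy embeddings in pure Python. All names and the toy inputs are hypothetical, not taken from any model in the surveyed papers; a real Transformer would add learned projections, multiple heads, and positional encodings.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query scores against ALL keys,
    # so every position can draw on every other position in one step --
    # the property that lets Transformers model long-range structure.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

For example, when all keys are identical the attention weights are uniform and each output row is simply the mean of the value vectors, which makes the weighting behaviour easy to verify by hand.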
-
Research Article
Resolving Technological Barriers: Development Strategies for Technological Competitive Intelligence of Technology Based Enterprises
Issue:
Volume 9, Issue 3, September 2025
Pages:
163-170
Received:
3 June 2025
Accepted:
16 June 2025
Published:
16 July 2025
Abstract: In the face of unprecedented global changes, some countries are moving against the tide of globalization, relying on their own technological strength to erect technological barriers and disrupt the industrial and supply chains of technology enterprises, with serious adverse effects on the recovery of the world economy and the development of high-tech industries. In response, this article uses literature research to identify the problems this situation creates: technology-based enterprises need to further strengthen the development of technological competitive intelligence, and must address insufficient competitive intelligence institutions, a shortage of high-level professional talent, and weak overall capabilities for developing competitive intelligence within enterprises. The article then proposes a practical path to resolve these problems: by improving the technological competitive intelligence network, establishing technological competitive intelligence agencies, and leveraging the intelligence-development role of high-tech industrial development zones, the adverse effects of technological barriers on enterprises can be effectively mitigated, and enterprises' technological accumulation, research and development, and innovation capabilities can be improved. These measures can, to some extent, alleviate the negative impact of technological barriers and stabilize the industrial and supply chains of high-tech enterprises.
-
Research Article
A Comprehensive Test Plan for Natural Language Processing Preprocessing Functions
Partha Majumdar*
Issue:
Volume 9, Issue 3, September 2025
Pages:
171-193
Received:
29 July 2025
Accepted:
8 August 2025
Published:
26 August 2025
Abstract: This paper outlines a comprehensive testing strategy for validating key natural language processing (NLP) preprocessing functions, specifically preprocess() and get_tokens(). These functions are vital for ensuring high-quality input data in NLP workflows. Recognising the influence of preprocessing on subsequent model performance, the plan employs a layered testing approach that includes functional, edge-case, negative, and property-based tests. It emphasises goals such as ensuring functional correctness, robustness, semantic integrity, and idempotency, supported by thorough test cases and automation with pytest and hypothesis. By systematically tackling pipeline fragility, this framework aims to ensure the reliability and reproducibility of NLP preprocessing, laying the groundwork for dependable, production-ready language models.
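As a hedged sketch of the properties this abstract describes, the snippet below implements a simplified stand-in for the paper's preprocess() and get_tokens() functions and a check of the idempotency property (applying preprocess() twice must equal applying it once). The pipeline shown here (lowercasing, punctuation stripping, whitespace collapsing) is an assumption for illustration, not the paper's actual implementation; the paper automates such properties with pytest and hypothesis.

```python
import re
import string

def preprocess(text: str) -> str:
    # Hypothetical simplified pipeline: lowercase, strip punctuation,
    # collapse runs of whitespace, trim the ends.
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def get_tokens(text: str) -> list[str]:
    # Hypothetical tokeniser: whitespace split over the preprocessed text.
    return preprocess(text).split()

def is_idempotent(text: str) -> bool:
    # Property-based goal from the abstract: preprocess(preprocess(x))
    # must equal preprocess(x) for any input x.
    once = preprocess(text)
    return preprocess(once) == once
```

With hypothesis, is_idempotent would be asserted under a @given(st.text()) decorator so the property is exercised across generated inputs rather than a handful of fixed examples.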
-
Research Article
Architecting an Integrated AI Platform for the Apparel Industry
Partha Majumdar*
Issue:
Volume 9, Issue 3, September 2025
Pages:
194-210
Received:
25 July 2025
Accepted:
11 August 2025
Published:
13 September 2025
DOI:
10.11648/j.ajist.20250903.14
Abstract: This research article proposes an integrated AI platform designed to revolutionise the apparel industry. The platform, envisioned as a comprehensive ecosystem, aims to enhance every stage of the apparel value chain, from design ideation to marketing and supply chain management. The architecture is built around three core components: an AI-Powered Design Studio, an Intelligent Production & Supply Chain Backbone, and a Hyper-Personalised Marketing & Engagement Engine. The AI-Powered Design Studio leverages generative AI, deep learning, and computer vision to transform the design process. A Trend Forecasting Engine utilises diverse data sources (social media, e-commerce, runway shows) to predict trends with high accuracy, providing data-driven insights that directly inform the AI Co-Creation Suite. This suite employs GANs, diffusion models, and sketch-to-image translation to generate design variations, refine concepts, and simulate fabric drape and fit, resulting in faster design cycles and commercially viable products. A Hyper-Personalisation Module further enhances this by generating personalised designs tailored to individual customer styles and body measurements, bridging the "aspiration gap" between desired and attainable styles. The Intelligent Production & Supply Chain Backbone focuses on efficiency and sustainability. A Material Optimisation and Waste Reduction System uses computer vision to detect fabric defects and optimise cutting layouts, minimising waste. Predictive inventory management and AI-powered logistics orchestration, combined with blockchain technology for enhanced traceability, create a responsive and transparent supply chain. Automated quality control, utilising computer vision, reduces defects and enables predictive maintenance of machinery, optimising production efficiency. A Sustainability and Circularity Management Dashboard provides a holistic view of environmental and social impact, facilitating data-driven decision-making and transparency for consumers. Finally, the Hyper-Personalised Marketing & Engagement Engine uses AI to deliver tailored experiences. A Personalised Marketing and Dynamic Campaign Engine, powered by a Customer Data Platform (CDP) and a "Latent Style" algorithm, provides personalised product recommendations, marketing messages, and promotions. An Automated Content Generation Engine generates marketing assets (product descriptions, social media posts, email copy) at scale, while a Unified Consumer Insights Platform provides real-time market analysis. Conversational AI, through chatbots and virtual stylists, enhances customer support and creates personalised interactions. The use of 3D digital twins and virtual prototyping, including virtual try-on (VTO) capabilities, enhances consumer engagement and reduces return rates. The article concludes with a phased implementation roadmap, prioritising data infrastructure, key modules with high ROI (such as waste reduction), and subsequent integration of generative design and personalisation. The overall goal is a symbiotic relationship between human creativity and AI's efficiency, resulting in a future-ready apparel industry characterised by enhanced speed, sustainability, and personalisation. Case studies of industry leaders like Zara, Stitch Fix, H&M, and Nike illustrate the successful application of similar AI strategies.
-
Research Article
The Accuracy-Interpretability Dilemma: A Strategic Framework for Navigating the Trade-off in Modern Machine Learning
Partha Majumdar*
Issue:
Volume 9, Issue 3, September 2025
Pages:
211-224
Received:
6 August 2025
Accepted:
15 August 2025
Published:
13 September 2025
DOI:
10.11648/j.ajist.20250903.15
Abstract: This paper explores the enduring accuracy-interpretability trade-off in machine learning, highlighting its profound implications for model selection, regulatory compliance, and practical deployment across diverse industries. It begins by defining accuracy as a model's ability to generalise effectively on unseen data, measured through context-specific metrics, and contrasts it with interpretability, which ensures that model predictions are understandable and justifiable to human stakeholders. The paper maps models across the white-box to black-box spectrum, from inherently transparent techniques such as linear regression and decision trees to opaque but highly accurate methods like ensemble models and deep neural networks. It critiques the conventional view that increasing accuracy necessarily diminishes interpretability, presenting alternative perspectives such as the Rashomon effect, which suggests that equally accurate yet interpretable models often exist within the solution space. The paper emphasises two pathways: interpretability-by-design approaches, such as Generalised Additive Models and sparse decision trees, and post-hoc explainability tools like LIME and SHAP that enhance transparency in black-box models. Industry case studies in finance, healthcare, algorithmic trading, and business strategy illustrate the context-dependent balance between performance and explainability, shaped by legal mandates, trust requirements, and operational priorities. The framework proposed equips practitioners with strategic questions to guide model selection, incorporating considerations of compliance, end-user needs, and the relative costs of errors versus missed insights. The paper also anticipates future advancements in Explainable AI, inherently interpretable architectures, and causal machine learning that could dissolve the trade-off altogether by achieving high accuracy without sacrificing transparency. By reframing the dilemma as a strategic decision rather than a rigid constraint, it provides a structured roadmap for aligning model development with business objectives, ethical imperatives, and stakeholder trust, advocating a shift towards accuracy and interpretability as complementary rather than competing goals.
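To make the white-box end of the spectrum concrete, the sketch below fits a one-feature ordinary-least-squares line in pure Python: its two fitted numbers (slope and intercept) are the entire model, so each prediction is directly justifiable to a stakeholder, unlike an ensemble or deep network. The helper name and toy data are illustrative assumptions, not drawn from the paper's case studies.

```python
def fit_line(xs, ys):
    # Closed-form ordinary least squares for y = a*x + b (one feature).
    # The fitted pair (a, b) IS the explanation: "each unit of x adds a to y".
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def predict(a, b, x):
    # A prediction is a single, auditable arithmetic step.
    return a * x + b
```

Post-hoc tools like LIME and SHAP aim to recover comparably simple local explanations for black-box models, which is why the paper treats them as a bridge rather than a replacement for interpretability-by-design.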