
Special Issue on Foundation and Large Language Models

Submission Date: 2025-09-30

Guest editors:


Yaser Jararweh, Jordan University of Science and Technology, Irbid, Jordan (yaser.amd@gmail.com)

Sandra Sendra, Polytechnic University of Valencia, Valencia, Spain (sansenco@upv.es)

Safa Otoum, Zayed University, Dubai, UAE (safa.otoum@zu.ac.ae)

Yoonhee Kim, Sookmyung Women's University, Korea (yulan@sookmyung.ac.kr)


Special issue information:


Background and Scope:


With the emergence of foundation models (FMs) and large language models (LLMs), which are trained on vast amounts of data at scale and are adaptable to a wide range of downstream applications, artificial intelligence is undergoing a paradigm shift. Models such as BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP now serve as the foundation for new applications ranging from computer vision to protein sequence analysis, and from speech recognition to code generation, whereas earlier models typically had to be trained from scratch for each new task. The ability to experiment with, examine, and understand the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its direction.

Nevertheless, these models remain largely inaccessible: the resources required to train them are highly concentrated in industry, and even the assets (data, code) needed to replicate their training are frequently withheld because of their commercial value. At present, only large technology companies such as OpenAI, Google, Facebook, and Baidu can afford to build FMs and LLMs. Despite the widespread and widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they work, why they underperform, and what they are even capable of, owing to their emergent properties. To address these problems, we believe that much of the critical research on FMs and LLMs will require extensive multidisciplinary collaboration, given their fundamentally sociotechnical nature.


Recommended Topics:


Architectures and Systems:

Transformers and Attention

Bidirectional Encoding

Autoregressive Models

Prompt Engineering

Multimodal LLMs

Fine-tuning

Challenges:

Hallucination

Safety and Trustworthiness

Interpretability

Fairness

Social Impact

Future Directions:

Generative AI

Explainability and Explainable AI

Retrieval Augmented Generation (RAG)

Federated Learning for FMs and LLMs

Fine-Tuning Large Language Models on Graphs

Data Augmentation

Applications:

Natural Language Processing

Communication Systems

Security and Privacy

Image Processing and Computer Vision

Life Sciences

Financial Systems


Manuscript submission information:


Important Dates:


Manuscript submission due date: September 30th, 2025

Author first notification: October 21st, 2025
