
1. Pre-Trained Models for Search and Recommendation: Introduction to the Special Issue—Part 1 (NSTL, National Science and Technology Library)

Wenjie Wang |  Zheng Liu... -  《ACM transactions on information systems》 - 2025,43(2) - 27.1~27.6 - 6 pages

Abstract (excerpts): In summary, this special issue contains a variety of studies on the application of pre-trained models to search and recommendation, covering both … , but are not limited to, leveraging pre-trained models for fine-grained user representation learning …
Keywords: Pre-trained Models |  Search |  Recommendation |  Large Language Models |  Trustworthiness

2. Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need

Zhou, Da-Wei |  Cai, Zi-Wen... -  《International Journal of Computer Vision》 - 2025,133(3) - 1012~1032 - 21 pages

Abstract (excerpts): Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre… … pre-trained models (PTMs) accessible for CIL … pre-trained and downstream datasets, PTM can be … the embeddings of PTM and adapted models for …
Keywords: Class-incremental learning |  Pre-trained models |  Continual learning |  Catastrophic forgetting

3. Using Embeddings of Pre-trained Models for Cross-Database Dysarthria Detection: Supervised vs. Self-supervised Approach

Sally Ismail |  Margarita Anastassov... -  《Machine Learning, Optimization, and Data Science, Part II》 -  International Conference on Machine Learning, Optimization, and Data Science - 2025 - 229~239 - 11 pages

Abstract (excerpts): One of the main obstacles facing the research in the automatic detection of voice pathology is the lack of data. In this work, we try to tackle … weakly supervised, pre-trained models, as feature … : SVM and a neural network. The pre-trained models …
Keywords: Self-supervised learning |  Voice pathology |  Deep learning |  Pre-trained models |  Machine learning |  Wav2vec2 |  HuBERT |  Whisper

4. SDPT: Synchronous Dual Prompt Tuning for Fusion-Based Visual-Language Pre-trained Models

Yang Zhou |  Yongjian Wu... -  《Computer Vision - ECCV 2024, Part XLIV》 -  European Conference on Computer Vision - 2025 - 340~356 - 17 pages

Abstract (excerpts): Prompt tuning methods have achieved remarkable success in parameter-efficient fine-tuning on large pre-trained models. However, their application to dual-modal fusion-based visual-language pre-trained models (VLPMs), such as GLIP, has encountered issues. Existing prompt …
Keywords: Prompt tuning |  Parameter-efficient fine-tuning |  Visual-language pre-trained models

5. Adapt and Refine: A Few-Shot Class-Incremental Learner via Pre-Trained Models

Sunyuan Qiang |  Zhu Xiong... -  《Pattern Recognition and Computer Vision, Part I》 -  Chinese Conference on Pattern Recognition and Computer Vision - 2025 - 431~444 - 14 pages

Abstract (excerpts): The intricate and ever-changing nature of the real world imposes greater demands on neural networks, necessitating the rapid assimilation of fleeting new concepts … powerful representational capabilities of pre-trained models (PTMs). Subsequently, we further adapt and …
Keywords: Few-shot class-incremental learning (FSCIL) |  Catastrophic forgetting |  Pre-trained models (PTMs)

6. Acoustic Classification of Bird Species Using Improved Pre-trained Models

Jie Xie |  Mingying Zhu... -  《PRICAI 2024, Part I》 -  Pacific Rim International Conference on Artificial Intelligence - 2025 - 375~382 - 8 pages

Abstract (excerpts): … identification. Previous studies have explored various pre-trained ImageNet models for acoustic classification of bird species. However, applying pre-trained ImageNet models directly to audio classification tasks presents … novel classification framework based on a pre-trained …
Keywords: Bird sound classification |  Pre-trained model |  Attention |  GRU

7. Introducing Structural Information of Argumentative Essays Into Pre-trained Models

Chuhan Wang |  Dailin Li... -  《Natural Language Processing and Chinese Computing, Part V》 -  CCF International Conference on Natural Language Processing and Chinese Computing - 2025 - 365~376 - 12 pages

Abstract (excerpts): Argument Mining (AM) is a significant task in the field of Natural Language Processing (NLP… … significant differences in predictions between models … simultaneously among models with varying parameter sizes … inference from large language models. By integrating these …
Keywords: Argument components identification |  Pre-trained language model |  Ensemble learning

8. Exploiting Pre-Trained Models and Low-Frequency Preference for Cost-Effective Transfer-based Attack

MINGYUAN FAN |  CEN CHEN... -  《ACM transactions on knowledge discovery from data》 - 2025,19(2) - 52.1~52.18 - 18 pages

Abstract (excerpts): … employing pre-trained models as proxy models, which are … prices of training proxy models also leads to … ), which addresses the optimization task in pre-trained models. CTA can be unleashed against broad applications … target models, regardless of their architecture or …
Keywords: Deep Neural Networks |  Adversarial Examples |  Black-box Adversarial Attacks |  Transferability

9. Leveraging Pre-trained Models for Robust Federated Learning for Kidney Stone Type Recognition

Ivan Reyes-Amezcua |  Michael Rojas-Ruiz... -  《Advances in Computational Intelligence, Part II》 -  Mexican International Conference on Artificial Intelligence - 2025 - 168~181 - 14 pages

Abstract (excerpts): Deep learning developments have improved … data privacy. However, FL models are susceptible to … degradation. Using pre-trained models, this research … This highlights the potential of merging pre-trained models with FL to address privacy and performance …
Keywords: Federated learning |  Robustness |  Medical imaging

10. A System of Multimodal Image-Text Retrieval Based on Pre-Trained Models Fusion

Qiang Li |  Feng Zhao... -  《Concurrency and computation: practice and experience》 - 2025,37(3) - e8345.1~e8345.11 - 11 pages

Abstract (excerpts): … challenge for the models' generalization performance and … fusion of pre-trained models. Firstly, we enhance the … to improve the models' generalization performance … models based on the specific characteristics of the … three different models. Finally, we construct a …
Keywords: Chinese-CLIP |  image-text retrieval |  MixGen |  model fusion |  multimodal
Search query: Pre-trained models