
1. VL-MPFT: Multitask Parameter-Efficient Fine-Tuning for Visual-Language Pre-trained Models via Task-Adaptive Masking (NSTL, National Science and Technology Library of China)

Min Zhu |  Guanming Liu... -  《Pattern Recognition and Computer Vision, Part V》 -  Chinese Conference on Pattern Recognition and Computer Vision - 2025, - 379~394 - 16 pages

Abstract (search snippets): Parameter-efficient fine-tuning (PEFT) has … existing parameter-efficient fine-tuning methods for VLMs … parameter-efficient fine-tuning for visual-language pre … selectively updating a small subset for fine-tuning. However … fine-tuning accordingly. In this paper, the proposed …
Keywords: Multimodal learning |  Multitask learning |  Parameter-efficient fine-tuning
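The snippet above describes selectively updating a small subset of parameters for fine-tuning. As a generic illustration of that idea (not the paper's task-adaptive masking criterion — the selection rule below is an assumption for demonstration), an update step can zero out gradients for all frozen parameters via a binary mask:

```python
import numpy as np

def masked_sgd_step(weights, grads, mask, lr=0.1):
    """One SGD step that updates only the parameters selected by a
    binary mask; masked-out (frozen) entries keep their values."""
    return weights - lr * (grads * mask)

rng = np.random.default_rng(0)
w = rng.normal(size=6)
g = np.ones(6)

# Keep only the 2 highest-magnitude weights trainable (an illustrative
# selection rule, not the task-adaptive criterion from the paper).
mask = np.zeros(6)
mask[np.argsort(np.abs(w))[-2:]] = 1.0

w_new = masked_sgd_step(w, g, mask)
frozen = mask == 0
print(np.allclose(w_new[frozen], w[frozen]))  # True: frozen weights unchanged
```

Only the masked-in entries move; everything else stays at its pre-trained value, which is what keeps the number of trainable parameters small.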

3. Isolation and Integration: A Strong Pre-trained Model-Based Paradigm for Class-Incremental Learning

Wei Zhang |  Yuan Xie... -  《Computational visual media: 12th International Conference, CVM 2024, Wellington, New Zealand, April 10-12, 2024, Proceedings, Part II》 -  International Conference Computational Visual Media - 2024, - 302~315 - 14 pages

Abstract (search snippets): Continual learning aims to effectively learn … from streaming data, adapting to emerging new classes … without forgetting old ones. Conventional models without … incremental learning framework, which leverages parameter-efficient tuning technique to finetune the pre-trained …
Keywords: Continual learning |  Class-incremental learning |  Pre-trained models |  Parameter-efficient tuning

4. FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models

Jingwei Sun |  Ziyue Xu... -  《2024 International Conference on Machine Learning, Part 57 of 75》 -  International Conference on Machine Learning - 2024, - 47159~47173 - 15 pages

Abstract (search snippets): … -tuning on specific data to cater to distinct downstream … model fine-tuning without centralized data collection. However, applying FL to finetune PLMs is hampered by … challenges, including restricted model parameter access due … efficient, privacy-preserving fine-tuning of PLM in the …

5. Parameter-Efficient Tuning Makes a Good Classification Head

Zhuoyi Yang |  Ming Ding... -  《Conference on Empirical Methods in Natural Language Processing, Part 11: Conference on Empirical Methods in Natural Language Processing (EMNLP 2022), 7-11 December 2022, Abu Dhabi, UAE and Online》 -  Conference on Empirical Methods in Natural Language Processing - 2022, - 7576~7586 - 11 pages

Abstract (search snippets): In recent years, pretrained models … , and finetune the whole model. As the pretrained … ) ineffective. In this paper, we find that parameter-efficient tuning makes a good classification head, with which we … with parameter-efficient tuning consistently improves …

6. Adaptable Adapters

Nafise Sadat Moosavi |  Quentin Delfosse... -  《Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 5: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2022), 10-15 July 2022, Online》 -  Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - 2022, - 3742~3753 - 12 pages

Abstract (search snippets): State-of-the-art pretrained NLP models contain … provide a parameter-efficient alternative for the full fine tuning in which we can only finetune lightweight … adapters for designing efficient and effective adapter … are therefore more efficient at training and …
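Adapters, as the snippet describes, are lightweight modules trained while the backbone stays frozen. A minimal sketch of the standard bottleneck-adapter form (down-projection, nonlinearity, up-projection, residual connection — a generic illustration, not this paper's adaptable-adapter architecture):

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, apply a nonlinearity,
    up-project, then add the residual connection back to the input."""
    return h + np.maximum(h @ W_down, 0.0) @ W_up

d, r = 8, 2                      # hidden size, bottleneck size (r << d)
rng = np.random.default_rng(0)
h = rng.normal(size=(4, d))      # a batch of 4 hidden states
W_down = rng.normal(size=(d, r)) * 0.01
W_up = np.zeros((r, d))          # zero-init so the adapter starts as identity

out = adapter(h, W_down, W_up)
print(np.allclose(out, h))       # True: identity at initialization
```

Only the 2·d·r adapter weights are trained (32 values here), versus d² for a full layer; the zero-initialized up-projection is a common trick so training starts from the frozen model's behavior.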

7. Efficient Fine-Tuning of Large Language Models via a Low-Rank Gradient Estimator

Zhang, Luoming |  Lou, Zhenyu... -  《Applied Sciences》 - 2025, 15(1) - 16 pages

Abstract (search snippets): In this paper, we present a Low-Rank Gradient Estimator (LoGE) to accelerate the finetune-time computation of transformers, especially large language models (LLMs). Unlike Parameter-Efficient Fine-Tuning … number of fine-tuning parameters, LoGE also …
关键词: low rank |  gradient estimator |  efficient fine-tuning |  efficient AI
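The abstract presents a low-rank estimate of the gradient. A generic sketch of the underlying idea — approximating a gradient matrix by its best rank-k factorization via truncated SVD (an assumption for illustration; LoGE's actual estimator may differ):

```python
import numpy as np

def low_rank_approx(G, k):
    """Best rank-k approximation of a gradient matrix via truncated SVD.
    A generic sketch of low-rank gradient estimation, not LoGE itself."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
# Build an exactly rank-2 "gradient" so the rank-2 approximation is exact.
G = rng.normal(size=(16, 2)) @ rng.normal(size=(2, 16))
G2 = low_rank_approx(G, 2)
print(np.allclose(G, G2))  # True: a rank-2 matrix is recovered exactly
```

The payoff is storage and compute: a rank-k factorization keeps k·(m+n) values instead of the full m·n gradient entries, which is where the fine-tuning speedup comes from.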

8. Self-Supervised Feature Learning Method for Hyperspectral Images Based on Mixed Convolutional Networks

Feng, Fan |  Zhang, Yongsheng... -  《Guangxue Xuebao》 - 2024, 44(18) - 12 pages

Abstract (search snippets): Objective: Hyperspectral images record the … Secondly, an efficient cascade feature fusion encoder is … learning. Third, the full-finetune approach is more … object are utilized for fine-tuning, and the overall … reduce parameter redundancy and improve parameter …
Keywords: hyperspectral image classification |  self-supervised learning |  contrastive learning |  mixed convolutional network |  second-order pooling
Search query: Parameter-efficient finetune-tuning