Title: DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Creators: Shi, Z; Lipani, A
Abstract: Prompt tuning (PT), where a small number of trainable soft (continuous) prompt vectors are affixed to the model input, has shown promising results across various tasks and model architectures for parameter-efficient fine-tuning (PEFT). PT stands out from other PEFT approaches because it maintains competitive performance with fewer trainable parameters and does not drastically scale up its parameters as the model size expands. However, PT introduces extra soft prompt tokens, leading to longer input sequences, which significantly impacts training/inference time and memory usage due to the Transformer's quadratic complexity; this is particularly concerning for Large Language Models (LLMs) that face heavy daily querying. To address this issue, we propose Decomposed Prompt Tuning (DePT), which decomposes the soft prompt into a shorter soft prompt and a pair of low-rank matrices that are then optimised with two different learning rates. This allows DePT to achieve better performance while saving substantial memory and time costs compared to vanilla PT and its variants, without changing the number of trainable parameters. Through extensive experiments on 23 natural language processing (NLP) and vision-language (VL) tasks, we demonstrate that DePT outperforms state-of-the-art PEFT approaches, including the full fine-tuning baseline, in some scenarios. Additionally, we empirically show that DePT grows more efficient as the model size increases. Our further study reveals that DePT integrates seamlessly with parameter-efficient transfer learning in the few-shot learning setting and highlights its adaptability to various model architectures and sizes.
Keywords: Natural Language Processing; Large Language Models; Parameter-efficient Fine-tuning
Publisher: International Conference on Learning Representations (ICLR)
Date: 2024-05-11
Type: Proceedings paper
Language: English
Source: In: 12th International Conference on Learning Representations, ICLR 2024. International Conference on Learning Representations (ICLR): Vienna, Austria. (2024)
Format: text
Identifiers: https://discovery.ucl.ac.uk/id/eprint/10195822/1/725_DePT_Decomposed_Prompt_Tun.pdf
             https://discovery.ucl.ac.uk/id/eprint/10195822/
Rights: open
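The abstract's core idea, splitting the trainable soft prompt into a shorter soft prompt plus a pair of low-rank matrices applied to the input word embeddings, each group trained with its own learning rate, can be sketched as follows. This is a minimal PyTorch sketch, not the authors' released implementation; the class name DecomposedPrompt, the dimensions, and the learning-rate values are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class DecomposedPrompt(nn.Module):
    """Sketch of the DePT idea: a short soft prompt prepended to the input,
    plus a low-rank update added to the frozen word embeddings."""

    def __init__(self, embed_dim=768, short_prompt_len=20, max_seq_len=256, rank=8):
        super().__init__()
        # Shorter soft prompt (fewer tokens than vanilla PT), prepended to the input.
        self.soft_prompt = nn.Parameter(torch.randn(short_prompt_len, embed_dim) * 0.02)
        # Pair of low-rank matrices; their product updates the frozen word embeddings.
        self.lora_a = nn.Parameter(torch.randn(max_seq_len, rank) * 0.02)
        self.lora_b = nn.Parameter(torch.zeros(rank, embed_dim))

    def forward(self, word_embeds):
        # word_embeds: (batch, seq_len, embed_dim) from the frozen embedding layer.
        seq_len = word_embeds.size(1)
        delta = self.lora_a[:seq_len] @ self.lora_b          # (seq_len, embed_dim)
        updated = word_embeds + delta                        # low-rank embedding update
        prompt = self.soft_prompt.unsqueeze(0).expand(word_embeds.size(0), -1, -1)
        return torch.cat([prompt, updated], dim=1)           # prepend the short prompt

# Two different learning rates, as described in the abstract: one for the soft
# prompt and another for the low-rank matrices (values here are assumed, not
# taken from the paper).
module = DecomposedPrompt()
optimizer = torch.optim.AdamW([
    {"params": [module.soft_prompt], "lr": 3e-1},
    {"params": [module.lora_a, module.lora_b], "lr": 5e-4},
])
```

Because the prepended prompt is shorter than in vanilla PT, the input sequence seen by the Transformer is shorter, which is where the claimed memory and time savings come from, while the low-rank matrices keep the total number of trainable parameters comparable.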