CVE-2024-28859

Published: 15/03/2024 Updated: 17/03/2024

Vulnerability Summary

Symfony1 is a community fork of symfony 1.4 with DIC, form enhancements, the latest Swiftmailer, better performance, Composer compatibility, and PHP 8 support. Symfony 1 contains a gadget chain, introduced by its vulnerable Swift Mailer dependency, that would enable a malicious user to achieve remote code execution if a developer unserializes user input in their project. The vulnerability poses no direct threat on its own, but it is a vector that enables remote code execution whenever a developer deserializes untrusted user data.

Symfony 1 depends on Swift Mailer, which has been bundled by default in the vendor directory of the default installation since version 1.3.0. Several Swift Mailer classes implement `__destruct()` methods, which PHP calls when it destroys the corresponding object in memory. However, it is possible to place an arbitrary object in `$this->_keys`, causing PHP to access properties of a different array or object than the developer intended. In particular, the array access triggered inside the `foreach ($this->_keys ...)` loop can be abused with any class implementing the `ArrayAccess` interface. This may allow a malicious user to execute arbitrary PHP code, leading to remote code execution.

This issue has been addressed in version 1.5.18. Users are advised to upgrade. There are no known workarounds for this vulnerability.
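To make the shape of such a chain concrete, the minimal sketch below illustrates how a `__destruct()` method that iterates and indexes into an attacker-controlled property can be combined with an `ArrayAccess` implementation whose `offsetGet()` has side effects. The class names (`VulnerableCache`, `SideEffectGadget`) and the entry point are hypothetical illustrations, not the actual Swift Mailer or symfony1 code:

```php
<?php
// Illustrative sketch of a deserialization gadget chain of this general shape.
// Class names are hypothetical and do not match the real Swift Mailer classes.

class VulnerableCache
{
    public $_keys = [];

    public function __destruct()
    {
        // Runs automatically when PHP destroys the object, e.g. after
        // unserialize() produced it and it goes out of scope.
        foreach ($this->_keys as $nsKey => $unused) {
            // If $_keys was swapped for an object implementing ArrayAccess,
            // this index access calls that object's offsetGet().
            $this->_keys[$nsKey];
        }
    }
}

class SideEffectGadget implements ArrayAccess
{
    public $callback = 'system';
    public $argument = 'id';

    public function offsetGet(mixed $offset): mixed
    {
        // Attacker-controlled callback and argument, reached from __destruct().
        return call_user_func($this->callback, $this->argument);
    }

    public function offsetExists(mixed $offset): bool { return true; }
    public function offsetSet(mixed $offset, mixed $value): void {}
    public function offsetUnset(mixed $offset): void {}
}

// The chain only fires if the application unserializes untrusted data, e.g.:
//   unserialize($_COOKIE['state']);
// where the attacker supplies a serialized VulnerableCache whose $_keys
// property is a SideEffectGadget.
```

This is why the advisory stresses that the dependency alone is not exploitable: the application must call `unserialize()` on attacker-controlled input for the destructor-based chain to be reachable.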

Recent Articles

OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories
The Register

While some other LLMs appear to flat-out suck

AI agents, which combine large language models with automation software, can successfully exploit real world security vulnerabilities by reading security advisories, academics have claimed. In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang – report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describ...