Severity | Number of Issues
---|---
HIGH | 166
MEDIUM | 28
LOW | 64
Rule | Severity | Component | Line | Description | Message | Key | Status
---|---|---|---|---|---|---|---
secrets:S6706 | HIGH | lib/insecurity.ts | 23 | Cryptographic private keys should not be disclosed | Make sure this private key gets revoked, changed, and removed from the code. | 8babbfe3-a445-48ea-b699-06a43e96b5c4 | OPEN |
typescript:S2068 | HIGH | frontend/src/app/Services/two-factor-auth-service.spec.ts | 64 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 0a83776f-db83-46de-ae12-3c38b6781c6b | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/Services/two-factor-auth-service.spec.ts | 80 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e69ccfee-a613-42c1-bd3e-b91b31171149 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/oauth/oauth.component.spec.ts | 85 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ab62012d-05a6-4443-8a13-32df3679cae2 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/oauth/oauth.component.spec.ts | 85 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f9095c9b-a2a3-4c24-938d-9a62bc7780c8 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/oauth/oauth.component.spec.ts | 92 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 0886c151-23ef-4658-8f05-1e89d4127890 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/register/register.component.spec.ts | 117 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | acdfbe0b-e902-412f-a60d-3ca6378c1e19 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/register/register.component.spec.ts | 135 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | dc78fb30-21b8-41d7-94c5-af90ba5d3d94 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/register/register.component.spec.ts | 136 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 9801095c-5cfb-4166-847d-fa49364f883b | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/register/register.component.spec.ts | 153 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5b08330d-753c-4e10-8f63-472d32bec290 | TO_REVIEW |
typescript:S2068 | HIGH | frontend/src/app/register/register.component.spec.ts | 153 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 60a6a2a0-7816-4e0d-bc07-6f913f477345 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/2faSpec.ts | 169 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 13683cc7-1330-4013-8444-2a9632d62032 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/2faSpec.ts | 195 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 285454eb-ae45-4f17-9bd4-75dd9c8fbf2a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/addressApiSpec.ts | 20 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b0d58cbb-8b67-42a5-8300-8a1ced11f14e | TO_REVIEW |
typescript:S2068 | HIGH | test/api/basketApiSpec.ts | 25 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 9759796f-609b-4aab-aac0-bc44031160e2 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/basketApiSpec.ts | 101 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ce995605-02bc-4722-85cc-b27b3ed22782 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/basketItemApiSpec.ts | 21 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | c66392fd-7dfb-4f12-8ef9-7dd8f44f1766 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 56 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 262d9cce-29da-4931-9b73-b18e053e5656 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 77 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f597082c-1de7-4e4a-b1b6-9e5a9269af0f | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 108 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f92b3c40-8b34-4415-a234-6bd3aae82152 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 140 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 7169f4a1-d6a9-4c32-969f-a7fd24ad36d5 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 174 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | dc8c0f74-1f5e-4b34-b1b1-b4ed07315b72 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 205 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f7c6a258-464a-413b-8ad0-2ae42bd591db | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 250 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 850d2b8b-78b5-498c-9301-639f83e2ea36 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 287 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 651254c0-40f5-4b04-b1a1-7855f93428cb | TO_REVIEW |
typescript:S2068 | HIGH | test/api/chatBotSpec.ts | 295 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 7e0868c6-89dd-482c-807e-f3ffdf1c0f95 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 22 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 98e35852-904b-4f59-a573-22b76d57e842 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 49 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 87e85df2-2acd-46cb-b0fc-84c75b3ec363 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 78 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d9879314-2c25-4c37-93d3-b25969cf7baf | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 113 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 04d652ea-f690-40e9-a177-37499588d553 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 153 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 7de6615c-9959-48f5-b972-7fc3aa71f968 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 195 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 005be7fb-9bff-4242-9c62-ea1a78b9015d | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 235 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | bd945745-7b4a-4801-8306-f9c2666a1526 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 283 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 3dc22111-e21c-4d71-a4d4-063135359737 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/dataExportApiSpec.ts | 333 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f2527c94-ee00-4c86-865e-0a7a2ece6b7a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deliveryApiSpec.ts | 23 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 8c92a350-3cd4-4d4d-86a7-67fc9d8787be | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deliveryApiSpec.ts | 52 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 87c70984-23df-4b57-8759-ddbb92caa20e | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deliveryApiSpec.ts | 83 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 947d2e16-e355-4209-a284-475fc6e7a873 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deliveryApiSpec.ts | 111 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 28dbde46-7a50-45f0-aa4b-f1ca773834d9 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 35 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d72634c6-c4f1-4434-99b2-4114ae2d787e | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 53 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5dafd217-ecb6-40b2-b2ae-b1d69389821a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 71 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | fe0891cd-5213-4b9f-ad01-fab70be28969 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 89 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5f4f9cc1-ff36-4503-990c-27b07a4cc446 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 105 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 68506eb4-e3c1-4e25-ba09-bc8d97cc7435 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 129 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 0daeed05-d0dd-45e4-b947-d60d391997db | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 149 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | fe0785d8-bdea-4b78-adcc-b09158cf12fb | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 170 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 383f4614-2709-48cb-b221-5dc5835a223a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/deluxeApiSpec.ts | 191 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 230c2da2-6cad-482b-b2b3-68085a9e2c67 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 18 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 796833f1-7886-402f-a31f-6b3ab187a346 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 37 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 7b65915a-7183-4ee7-8b5a-5ef97954f886 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 64 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b62a8769-3049-4d83-9c23-1b04bd3aead2 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 80 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | c94dfbe2-4e90-4878-b097-4536b903b537 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 99 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | adfb7228-1f25-4b30-93a0-777504b719a7 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 119 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | cf01746d-d34a-4f6c-9f51-19fd88b50f66 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/erasureRequestApiSpec.ts | 140 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 05bdebe0-c4be-49e2-8ff6-99d15bd0aabd | TO_REVIEW |
typescript:S2068 | HIGH | test/api/feedbackApiSpec.ts | 119 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 6a2f361b-5fb4-46c8-b0e2-38582f75aef9 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/feedbackApiSpec.ts | 152 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 24e9b6be-d005-4564-99b3-369764819094 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 21 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 108b782b-f4cb-4fb6-9e78-90ec909b12ec | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 30 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 149a19b8-fe5b-4470-a3c3-149f192e180b | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 46 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 8536c7a8-c91f-422f-94a8-58aa3e81cc4f | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 64 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 1546fb06-e500-4c26-92d7-d46d82a159ae | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 79 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 92ebe67d-120a-4373-a734-5aaca0798230 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 94 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e159cd51-b073-4f3e-a89f-b401d53a304a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 109 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 004caff7-ad6c-4d36-ba48-34d868f51533 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 124 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | bc511b17-c3c1-4c39-bda3-88c3d905d21b | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 142 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | df3b7ad7-d539-4528-92ec-452c2ca99d12 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 245 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d0169d04-b7b8-4c49-912e-1cc3d602beba | TO_REVIEW |
typescript:S2068 | HIGH | test/api/loginApiSpec.ts | 266 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 746dafd1-95f5-4268-b054-d1fa9d2d2c26 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/memoryApiSpec.ts | 26 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 46cc3061-a0e5-4641-ba7f-71da0ec27875 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/memoryApiSpec.ts | 64 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | dd36987b-2f92-46f6-9594-9accb1f356e8 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/memoryApiSpec.ts | 91 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 2cca2b86-b3d9-4267-b3b9-bcf711552a2d | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 19 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 2856ea55-ef65-4508-ab5c-de59668cb426 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 56 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 32b8672f-3709-4e3e-9f52-94b0a288c58f | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 73 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | a384cef7-92aa-4598-9f12-565f407c8816 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 90 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b7067d7f-6096-4512-aa72-c17abd73e554 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 109 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b976cdf4-3aba-4b42-9541-4b4aa204140e | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 129 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | aa69b445-23bc-424b-a980-c3a15cf2eac9 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/orderHistoryApiSpec.ts | 149 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 040bd4ec-517f-4275-94a9-f2d226de2c97 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/passwordApiSpec.ts | 20 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 39511da1-51ce-4a9c-b60d-ddd7f77ec9bd | TO_REVIEW |
typescript:S2068 | HIGH | test/api/passwordApiSpec.ts | 29 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5cff6d8a-34d7-4362-9afa-6b7706de409b | TO_REVIEW |
typescript:S2068 | HIGH | test/api/passwordApiSpec.ts | 47 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e375fe27-0379-4c76-bac6-66140142e0a7 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/passwordApiSpec.ts | 93 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 2af3e36a-ec5d-4093-81e7-23ae9e76b588 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/paymentApiSpec.ts | 20 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d1ea95d5-f09f-4d14-a62c-eb4c796694d8 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/productReviewApiSpec.ts | 111 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 1eaaa6f1-a97a-42d3-905d-c77badfff1ca | TO_REVIEW |
typescript:S2068 | HIGH | test/api/productReviewApiSpec.ts | 131 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 530c7b4a-2303-43e8-b916-d042fa2c8922 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/profileImageUploadSpec.ts | 25 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | dace1506-34c3-4636-90bd-4082368fd7ae | TO_REVIEW |
typescript:S2068 | HIGH | test/api/profileImageUploadSpec.ts | 52 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d7fbd7cd-3fde-403e-ae08-efc4b9d045b9 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/profileImageUploadSpec.ts | 97 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | fdd1b8e1-23bb-4a7d-a3a9-ebd8d119e951 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/profileImageUploadSpec.ts | 123 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b40b0a25-79fb-447c-98f7-934d0ef93db4 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 21 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 61eed290-3cf4-41ee-8dd0-54eeb53c1fa4 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 38 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 7093815c-5d61-437d-8ea9-4edc1a97af1e | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 55 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | dd1db6e8-6973-47a6-8dae-af428fab65c7 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 72 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 13da27d2-b140-400d-aa11-0fcfea329568 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 93 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 007f79af-6c5b-400d-b3c9-6e864f8848a5 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 114 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 4646121f-fe10-42f1-a62b-32330454b517 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 137 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5479405b-bf34-41e7-bcd9-67433858ca6a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 155 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 959f416d-b2c5-481d-a84e-b16508d70175 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 173 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 3415909c-64ef-46ba-95a3-5b51d0fb270c | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 190 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ef2746fa-e18e-4a20-802a-dbd531a74a98 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 207 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d176ef3a-b827-4bc8-b7c2-d0d572d36f08 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 228 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e17b195b-2adb-4eb7-beaf-730223aec153 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 249 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 1328ebb2-ddf1-46d4-8571-9a641acaf91f | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 269 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ab7bbeb5-9e68-4602-bc5a-f8ef113c2427 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 292 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 12b9e569-424f-4da8-b3df-892a292990c7 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 309 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 69086d04-21d2-4ee8-8a88-6c6e889ec310 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/quantityApiSpec.ts | 326 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f283a5a4-6d0d-4cf1-a5b6-3e93e7c344fe | TO_REVIEW |
typescript:S2068 | HIGH | test/api/securityAnswerApiSpec.ts | 44 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 65dea2a5-df7d-40fe-984e-28345489a98d | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 42 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d753a55c-1ffa-422c-bb61-047a4374f6ae | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 60 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | cd290a18-772c-456d-a82f-e2660ba15d8b | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 82 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 9436e537-abfd-4959-bf21-748b9c83679a | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 100 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 79109e0a-6b2b-4e54-9bc2-97a699c2b53c | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 106 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 8d6aa985-324d-44b3-a073-bae710cf2a5b | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 118 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 3203d3c3-f32b-4769-920c-e5fb1b940300 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 136 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 0af72e6d-f45b-4c4d-b692-902b6cb6e5fb | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 158 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e4bd33ba-36fc-4c30-8361-5c174cce0bb4 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 180 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f6612e7b-7489-4186-b434-8c264f5b3b19 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 199 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 4dff97ee-54a3-4ee6-8bdd-e3410dab0e60 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 260 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 2e3dd1ee-d96d-45df-923f-9d4b2e2294c1 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userApiSpec.ts | 271 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | cd71467e-7dbf-476e-ad59-20b24f53afcc | TO_REVIEW |
typescript:S2068 | HIGH | test/api/userProfileSpec.ts | 19 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 7c989cbd-508b-4b48-a7d1-2faa2924bf60 | TO_REVIEW |
typescript:S2068 | HIGH | test/api/walletApiSpec.ts | 18 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | a3372cbc-7983-4bf6-8f5a-22c0f5a9722e | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/administration.spec.ts | 5 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 3d0779f8-b57b-4535-bc8c-f03f0c88461d | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/b2bOrder.spec.ts | 6 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d76b1b24-e296-4f5b-b023-347effde2711 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/b2bOrder.spec.ts | 37 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 35487a7d-8149-4c05-9904-13364c157499 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/basket.spec.ts | 4 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 4c544fd0-6bdc-40f5-a7c3-a1882f07c247 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/basket.spec.ts | 76 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 45662530-f216-4068-aecb-45b4812d0f0c | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/changePassword.spec.ts | 6 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 9bfd28ba-51ad-43e9-ab2a-0fd71960e525 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/changePassword.spec.ts | 25 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 445f7c4d-7f0d-469c-83fd-b3f7ee576433 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/changePassword.spec.ts | 31 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 3e9c9b18-8a93-4b71-b7de-a872b0adbbfa | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/chatbot.spec.ts | 3 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 370e42c6-346c-40ec-8d11-f0076ca44f33 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/complain.spec.ts | 5 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b892739d-c13c-42b2-bd16-44ec08b3ed59 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/contact.spec.ts | 11 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 233c99ce-a5ea-4ee5-935b-ad77136e8a21 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/contact.spec.ts | 47 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 80839754-17b2-441a-94f9-b03ab0de6760 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/dataErasure.spec.ts | 3 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 78752162-7f4b-4cda-a89a-3e6b3e3c10b3 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/dataExport.spec.ts | 24 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | de8a1a1d-9133-435b-bd6b-939186cebe99 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/deluxe.spec.ts | 4 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d155e445-9c77-4081-8d17-d34c8bfaa09f | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/deluxe.spec.ts | 21 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ae8d9605-d7bb-446e-85cd-7c790d9bb2e0 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/noSql.spec.ts | 8 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | b917a9ea-8a5e-4a9a-815e-abee6a596226 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/noSql.spec.ts | 53 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5fb18b6d-a65d-4e69-b77a-3c037cc0e0d9 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/noSql.spec.ts | 76 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 09e5cdea-0414-48d3-91e0-ba5bf446b63b | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/noSql.spec.ts | 120 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e09d1679-3882-45fb-b187-895689850419 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/profile.spec.ts | 3 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ee873421-8e8c-4fed-9c56-b9702d72ec85 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/register.spec.ts | 10 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 0890898f-7176-433a-9566-802c448ff2c3 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/register.spec.ts | 28 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 6d311f8d-8aae-456d-b3ac-eb5d964e7315 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/register.spec.ts | 29 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f4f2621f-7cf7-46a1-9f09-e1a6077bd7f6 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/register.spec.ts | 60 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 5acacd63-1f74-4361-baae-602489c68126 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/register.spec.ts | 61 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 9e0e7f1d-3d17-4966-89aa-3ea10521d077 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/register.spec.ts | 84 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 8b983d43-67c0-4c68-a89b-b2c40e061a48 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/restApi.spec.ts | 4 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 8b933160-f56e-4dcf-9f46-2328d3410b88 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/restApi.spec.ts | 82 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | 41b41231-d0c3-42d5-abfc-33608fce6e35 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/search.spec.ts | 56 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | f7b15e11-9c10-4d54-be06-fc37ab74dccd | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/search.spec.ts | 83 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | ed049805-d5d4-44a3-b965-1a680d44b443 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/totpSetup.spec.ts | 6 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | e2cfd822-f02b-4e6a-8be1-ea60fe9053f3 | TO_REVIEW |
typescript:S2068 | HIGH | test/cypress/e2e/totpSetup.spec.ts | 20 | Hard-coded credentials are security-sensitive | Review this potentially hardcoded credential. | d185238f-022d-46ee-9d2c-c88260695e07 | TO_REVIEW |
docker:S6504 | HIGH | Dockerfile | 47 | Allowing non-root users to modify resources copied to an image is security-sensitive | Make sure no write permissions are assigned to the executable. | 3a3c15d8-808b-4874-a16a-d92f3ba4ffed | TO_REVIEW |
docker:S6504 | HIGH | Dockerfile | 48 | Allowing non-root users to modify resources copied to an image is security-sensitive | Make sure no write permissions are assigned to the executable. | f26391c2-49e1-4887-88f1-fc7f3f2b18aa | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/about/about.component.ts | 84 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | 1827ad14-5181-46d8-b3c2-cc13fc4d9bf4 | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/administration/administration.component.ts | 50 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | a6125ed1-aa29-4b57-a964-c415f9a4f814 | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/administration/administration.component.ts | 65 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | 93e3e2b9-9032-4594-b400-e2f1cce4099e | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/data-export/data-export.component.ts | 45 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | 74d0aa59-191d-4e08-a7af-d3fd7f84be65 | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/last-login-ip/last-login-ip.component.ts | 36 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | f3fc02fb-b3eb-4e3e-8489-843a0c30ecd3 | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/score-board-legacy/score-board-legacy.component.ts | 216 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | 8bce838c-1ac9-41f2-b8a1-5d7a2d043014 | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/score-board/score-board.component.ts | 71 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | 9fab1e44-cc63-4d52-a5eb-b40e198ce901 | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/search-result/search-result.component.ts | 125 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | bc9efbe7-4839-4db6-b447-d02d36585c8d | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/search-result/search-result.component.ts | 151 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | d62e23ac-1c18-471e-a582-c9f782e1110a | TO_REVIEW |
typescript:S6268 | HIGH | frontend/src/app/track-result/track-result.component.ts | 41 | Disabling Angular built-in sanitization is security-sensitive | Make sure disabling Angular built-in sanitization is safe here. | 46166238-37f1-4d55-a3b8-d54e8894280a | TO_REVIEW |
typescript:S5852 | MEDIUM | frontend/src/app/change-password/change-password.component.ts | 36 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | 530623ff-7ebd-44fa-8c28-a2076bcca0fb | TO_REVIEW |
typescript:S5852 | MEDIUM | lib/codingChallenges.ts | 66 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | f69c1439-e7e1-4f87-be0a-2f3355a85e9f | TO_REVIEW |
typescript:S5852 | MEDIUM | lib/codingChallenges.ts | 67 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | 253c7b3e-c9ed-4b34-883f-96b8a2839b6a | TO_REVIEW |
typescript:S5852 | MEDIUM | lib/startup/registerWebsocketEvents.ts | 48 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | 3812b7e8-af7f-4e78-b94a-c86c319627e8 | TO_REVIEW |
typescript:S5852 | MEDIUM | lib/utils.ts | 216 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | d8fa42d1-0424-46f1-b854-847915a89b2d | TO_REVIEW |
typescript:S5852 | MEDIUM | routes/profileImageUrlUpload.ts | 19 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | 9c3733a9-a142-416e-bb25-6c4b9ab2b39d | TO_REVIEW |
typescript:S5852 | MEDIUM | server.ts | 227 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | d3b44980-2dcc-4061-8cf6-250c98e9157d | TO_REVIEW |
typescript:S5852 | MEDIUM | test/api/metricsApiSpec.ts | 17 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | 8bd95ebe-c4c2-43ed-9ba6-8bd447e6d81c | TO_REVIEW |
typescript:S5852 | MEDIUM | test/cypress/support/commands.ts | 36 | Using slow regular expressions is security-sensitive | Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. | ebdfbebd-1245-43de-a966-47b7939dfe30 | TO_REVIEW |
typescript:S5693 | MEDIUM | server.ts | 640 | Allowing requests with excessive content length is security-sensitive | Make sure the content length limit is safe here. | 07a8a345-d945-429e-b7a5-858839e9ea61 | TO_REVIEW |
typescript:S5693 | MEDIUM | server.ts | 646 | Allowing requests with excessive content length is security-sensitive | Make sure the content length limit is safe here. | 518f567b-dbac-4116-aec3-8b99e2b0acf2 | TO_REVIEW |
typescript:S5693 | MEDIUM | server.ts | 647 | Allowing requests with excessive content length is security-sensitive | Make sure the content length limit is safe here. | aa489f2d-5b22-4a13-86f7-7198b0769852 | TO_REVIEW |
docker:S6471 | MEDIUM | test/smoke/Dockerfile | 1 | Running containers as a privileged user is security-sensitive | The alpine image runs with root as the default user. Make sure it is safe here. | d26af71b-acc6-48b0-9fa3-29300fabcb0e | TO_REVIEW |
docker:S6470 | MEDIUM | Dockerfile | 2 | Recursively copying context directories is security-sensitive | Copying recursively might inadvertently add sensitive data to the container. Make sure it is safe here. | 23187cd0-d8dd-438c-9688-8bf26a74a1c6 | TO_REVIEW |
typescript:S1523 | MEDIUM | routes/captcha.ts | 23 | Dynamically executing code is security-sensitive | Make sure that this dynamic injection or execution of code is safe. | 497d7835-6cd3-4333-a69a-967c0617f97f | TO_REVIEW |
typescript:S1523 | MEDIUM | routes/userProfile.ts | 36 | Dynamically executing code is security-sensitive | Make sure that this dynamic injection or execution of code is safe. | 26cbd0e4-3b05-4807-9ee0-91b8b1e50f96 | TO_REVIEW |
typescript:S1523 | MEDIUM | test/cypress/e2e/contact.spec.ts | 258 | Dynamically executing code is security-sensitive | Make sure that this dynamic injection or execution of code is safe. | f4a66727-948d-4202-b63f-feecd7d04322 | TO_REVIEW |
typescript:S2245 | MEDIUM | data/datacreator.ts | 226 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | eddc69c3-e195-439b-85bf-d1f50fb60285 | TO_REVIEW |
typescript:S2245 | MEDIUM | data/datacreator.ts | 244 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | 0d920e34-dea8-4f7f-8a96-82af19c1f143 | TO_REVIEW |
typescript:S2245 | MEDIUM | data/datacreator.ts | 292 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | b30be05b-2281-4cdf-9dda-6ba16fd00d82 | TO_REVIEW |
typescript:S2245 | MEDIUM | data/datacreator.ts | 670 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | 978c07eb-6d8f-484a-b93c-86ca7a2da913 | TO_REVIEW |
typescript:S2245 | MEDIUM | frontend/src/app/code-snippet/code-snippet.component.ts | 146 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | ae247e5a-7795-4696-a6c6-e83b6ff90a6d | TO_REVIEW |
typescript:S2245 | MEDIUM | lib/insecurity.ts | 55 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | b8f286ad-bdc0-4977-bc50-13ce9d997ab5 | TO_REVIEW |
typescript:S2245 | MEDIUM | routes/captcha.ts | 15 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | 357fcdbd-0101-494f-8eff-aeecc3bddbbd | TO_REVIEW |
typescript:S2245 | MEDIUM | routes/captcha.ts | 16 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | 2582d700-30b1-4b32-bceb-c996d8493384 | TO_REVIEW |
typescript:S2245 | MEDIUM | routes/captcha.ts | 17 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | b14f0482-13bf-43c6-9572-8e807580af0b | TO_REVIEW |
typescript:S2245 | MEDIUM | routes/captcha.ts | 19 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | ec7445a7-c168-4674-9aea-24721f152b4d | TO_REVIEW |
typescript:S2245 | MEDIUM | routes/captcha.ts | 20 | Using pseudorandom number generators (PRNGs) is security-sensitive | Make sure that using this pseudorandom number generator is safe here. | cb2f4a40-d5bc-403e-9d7a-1668bad6a376 | TO_REVIEW |
docker:S5332 | LOW | test/smoke/Dockerfile | 7 | Using clear-text protocols is security-sensitive | Make sure that using clear-text protocols is safe here. | 699a43bb-aeec-49b2-a461-157e2e22a146 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_1.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 01e6bffd-e1ca-45d6-b9ce-86a0a7a7ae0c | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_1.ts | 7 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 46c53e2e-0fae-43fc-ade7-b52245571261 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_1.ts | 9 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | f7f6ab05-0027-4960-a2cf-36adb1626380 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_2.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 8383d3e0-2748-4691-8ae7-95bbe9da7c02 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_2.ts | 7 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | cd50adb0-ebb1-448b-b83e-8eb393b83cce | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_2.ts | 9 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 55b3a09d-9711-4b58-804c-0f12b5f978ae | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_3.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 69ad3605-4861-4946-8f75-ed6c9755f6d6 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_3.ts | 7 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 459479d6-a161-4bcf-b16d-ef22ef886196 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_3.ts | 9 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 41bb1388-815d-49b3-abfe-7e9336c30d7d | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_4_correct.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 27cd864e-b45c-489d-90fb-037d240f8441 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_4_correct.ts | 7 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | bde0d0f4-9ac6-4622-a448-3301532b3b71 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectChallenge_4_correct.ts | 9 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 148c7a46-aa32-4e9f-b8ca-f57211465328 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts | 5 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 4ae541ae-90e8-43ea-a6ff-9b89f411116f | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 68d773c4-e04d-4f5e-ac36-1c589f9f133b | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts | 8 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | cfbaa0f5-8e10-47a9-9441-594c46bd0e2b | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts | 5 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | ffa0ef0a-7924-4971-bdef-5a820e6e831a | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | cb0e9869-781d-4de8-ace7-bac6c9b8967e | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts | 8 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | ae5e59f4-410c-4cea-af02-fd92973158d3 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts | 3 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | f094f4de-d70d-4a5f-ae22-5c45cfddef9e | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts | 4 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | b28750a9-bc01-49b3-912a-c46d76af0164 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | ca4e0679-06d1-4ad8-9188-78b58b74d43f | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts | 5 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 4126602b-819b-4770-b529-a8643ec80930 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts | 6 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | c669ad9a-572e-46cd-94f9-7e3f77d37b07 | TO_REVIEW |
typescript:S5332 | LOW | data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts | 8 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 0a668f9b-b886-43fa-8ac2-4dcb00fbaec7 | TO_REVIEW |
typescript:S5332 | LOW | frontend/src/app/order-completion/order-completion.component.spec.ts | 136 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | e6b5f5c4-915b-44e8-937b-b84e7ed0e25c | TO_REVIEW |
typescript:S5332 | LOW | frontend/src/app/score-board-legacy/score-board-legacy.component.spec.ts | 281 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 8d002976-8481-41c9-946b-9d081b08dd69 | TO_REVIEW |
typescript:S5332 | LOW | frontend/src/app/score-board-legacy/score-board-legacy.component.spec.ts | 290 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | d06a05dc-b537-44e9-bdf3-5b8aa5e7efd1 | TO_REVIEW |
typescript:S5332 | LOW | lib/insecurity.ts | 135 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 9dd6a928-c321-471e-8c4b-d602c990f3ee | TO_REVIEW |
typescript:S5332 | LOW | lib/insecurity.ts | 136 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 4fc39eaf-cc7f-4582-8880-7be2cc4d391f | TO_REVIEW |
typescript:S5332 | LOW | lib/insecurity.ts | 138 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 63a9cd6e-1e4f-44f0-8b4e-278c1f144253 | TO_REVIEW |
typescript:S5332 | LOW | test/cypress/e2e/profile.spec.ts | 74 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 0eb1deeb-0b21-4a5d-9ca1-fc6e28c40c41 | TO_REVIEW |
typescript:S5332 | LOW | test/cypress/e2e/profile.spec.ts | 107 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | a8f2a3c7-36ff-4fdc-abb8-1e4ed533c762 | TO_REVIEW |
typescript:S5332 | LOW | test/server/redirectSpec.ts | 42 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 5ba9bcbb-bd32-464b-81e0-e904b43237a8 | TO_REVIEW |
typescript:S5332 | LOW | test/server/redirectSpec.ts | 78 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | ae5a188a-889a-47b0-a29d-64884fbebb34 | TO_REVIEW |
typescript:S5332 | LOW | test/server/utilsSpec.ts | 36 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | d892f53c-8f1e-429d-b0c6-6047b364479b | TO_REVIEW |
typescript:S5332 | LOW | test/server/utilsSpec.ts | 40 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 6b485e4a-c64d-42d8-8d0d-baff1bddabc6 | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 86 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 4e91181e-66b8-4fa6-86a1-01e1ad53e5eb | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 95 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 4eaeffd7-9d89-47d8-b717-2bf364f3c304 | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 104 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 61274224-329e-4eb3-b2da-b56736830dac | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 113 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | c6c6c8f9-4f7a-4599-a090-084ff746bcb3 | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 123 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 41c8b68b-3062-423b-8ac3-a2a44cf008d4 | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 132 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 1ca49860-9a54-4e8f-9b6b-131de5b99b3f | TO_REVIEW |
typescript:S5332 | LOW | test/server/verifySpec.ts | 141 | Using clear-text protocols is security-sensitive | Using http protocol is insecure. Use https instead. | 2ba5fc95-7a77-42b3-b306-0a9fa13074ec | TO_REVIEW |
typescript:S4507 | LOW | server.ts | 634 | Delivering code in production with debug features activated is security-sensitive | Make sure this debug feature is deactivated before delivering the code in production. | 02c1ceb2-3a93-4067-9337-7c142470237a | TO_REVIEW |
typescript:S5122 | LOW | server.ts | 164 | Having a permissive Cross-Origin Resource Sharing policy is security-sensitive | Make sure that enabling CORS is safe here. | ee3ca45e-e65d-4c6a-9296-31b507da6092 | TO_REVIEW |
typescript:S5122 | LOW | server.ts | 165 | Having a permissive Cross-Origin Resource Sharing policy is security-sensitive | Make sure that enabling CORS is safe here. | 52200940-eb3d-4beb-9dd8-8a2d9f3bae9f | TO_REVIEW |
typescript:S1313 | LOW | test/api/loginApiSpec.ts | 253 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address 1.2.3.4 is safe here. | 40f10b1c-4c13-4fa2-8550-f0db993804d8 | TO_REVIEW |
typescript:S1313 | LOW | test/api/loginApiSpec.ts | 257 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address 1.2.3.4 is safe here. | f416cdc8-8a31-4fc9-843a-494e91dd8cca | TO_REVIEW |
typescript:S1313 | LOW | test/server/utilsSpec.ts | 14 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is safe here. | 06969d69-7903-401f-9a0e-43fc62fd40af | TO_REVIEW |
typescript:S1313 | LOW | test/server/utilsSpec.ts | 14 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is safe here. | 7c613c2d-75b5-4744-8b0b-c6b34de2a7fc | TO_REVIEW |
typescript:S1313 | LOW | test/server/utilsSpec.ts | 18 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address 0:0:0:0:0:ffff:7f00:1 is safe here. | b4c693f4-52c2-49b7-afa9-b97fdb5b93d6 | TO_REVIEW |
typescript:S1313 | LOW | test/server/utilsSpec.ts | 18 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address 0:0:0:0:0:ffff:7f00:1 is safe here. | fe84cc9f-ebcf-492d-bfdd-e59fc4c973a4 | TO_REVIEW |
typescript:S1313 | LOW | test/server/utilsSpec.ts | 26 | Using hardcoded IP addresses is security-sensitive | Make sure using a hardcoded IP address ::ffff:192.0.2.128 is safe here. | 079b14d3-1721-49b5-99c6-737dcddf4bbb | TO_REVIEW |
typescript:S4790 | LOW | lib/insecurity.ts | 43 | Using weak hashing algorithms is security-sensitive | Make sure this weak hash algorithm is not used in a sensitive context here. | 74bbbeb3-e80a-4a27-b5b7-59aa26b52229 | TO_REVIEW |
Web:S5725 | LOW | frontend/src/index.html | 15 | Using remote artifacts without integrity checks is security-sensitive | Make sure not using resource integrity feature is safe here. | 60365ae7-e93c-42e5-9885-d93f48ea726c | TO_REVIEW |
Web:S5725 | LOW | frontend/src/index.html | 16 | Using remote artifacts without integrity checks is security-sensitive | Make sure not using resource integrity feature is safe here. | 740eedf1-9974-4a15-9766-27e2fa731dbe | TO_REVIEW |
docker:S6500 | LOW | Dockerfile | 25 | Automatically installing recommended packages is security-sensitive | Make sure automatically installing recommended packages is safe here. | 5394f416-6e28-48d9-a2aa-b6e3a1bb0437 | TO_REVIEW |
javascript:S4790 | LOW | Gruntfile.js | 76 | Using weak hashing algorithms is security-sensitive | Make sure this weak hash algorithm is not used in a sensitive context here. | 86dfb061-1696-4e55-b91d-f4c6eca3de1c | TO_REVIEW |
docker:S6505 | LOW | Dockerfile | 4 | Allowing shell scripts execution during package installation is security-sensitive | Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. | 04a024e5-9319-4669-951d-4db55cbf6df2 | TO_REVIEW |
docker:S6505 | LOW | Dockerfile | 5 | Allowing shell scripts execution during package installation is security-sensitive | Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. | 9e455705-d5e7-4c3f-8ad1-457c4953c10d | TO_REVIEW |
docker:S6505 | LOW | Dockerfile | 19 | Allowing shell scripts execution during package installation is security-sensitive | Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. | 43ae4683-316f-4f6d-b2d4-dcbe69cdd97d | TO_REVIEW |
Web:S5148 | LOW | frontend/src/app/nft-unlock/nft-unlock.component.html | 63 | Authorizing an opened window to access back to the originating window is security-sensitive | Make sure not using rel="noopener" is safe here. | fa94685c-3731-495f-95fc-75c7965e0d66 | TO_REVIEW |
Web:S5148 | LOW | frontend/src/app/nft-unlock/nft-unlock.component.html | 80 | Authorizing an opened window to access back to the originating window is security-sensitive | Make sure not using rel="noopener" is safe here. | 01011d38-c7cf-4b69-b5a4-d644d5094e09 | TO_REVIEW |
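A recurring pattern in the S2245 hotspots above is security-relevant randomness drawn from `Math.random()`, which is predictable. As a hedged sketch (the helper names below are invented for illustration, not taken from this codebase), Node's built-in `crypto` module provides CSPRNG-backed alternatives:

```typescript
import { randomInt, randomBytes } from 'node:crypto'

// Illustrative replacement for Math.random()-based numeric choices
// (e.g. captcha operands): randomInt draws uniformly from the OS CSPRNG.
function secureRandomDigit (): number {
  return randomInt(0, 10) // integer in [0, 10)
}

// Illustrative replacement for identifiers or tokens assembled from
// Math.random(): byteLength random bytes as 2 * byteLength hex characters.
function secureToken (byteLength: number = 16): string {
  return randomBytes(byteLength).toString('hex')
}
```

Not every flagged line needs this: PRNG use in test fixtures or for non-security data seeding can be reviewed and marked safe instead.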
## Rule Descriptions
### azureresourcemanager:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

**Why is this an issue?**

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains: they preserve the confidentiality, integrity, and authenticity of data.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

- No encryption is unbreakable.
- The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, all SSL versions, as well as TLS v1.0 and v1.1, are considered weak by the cryptographic community and are officially deprecated.

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

**What is the potential impact?**

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

*Additional attack surface.* By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code and further exploit the system to obtain more information.

*Breach of confidentiality and privacy.* When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold: beyond the breach itself, exposure of encrypted data can undermine trust in the organization, as customers, clients, and stakeholders may lose confidence in its ability to protect their sensitive data.

*Legal and compliance issues.* In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

**How to fix it in Databases**

The following code samples are equivalent for Azure Database for MySQL servers, Azure Database for PostgreSQL servers, and Azure Database for MariaDB servers. For all of these, no minimal TLS version is enforced by default.
Noncompliant code example:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_0"
      }
    }
  ]
}
```

```bicep
resource mysqlDbServer 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    minimalTlsVersion: 'TLS1_0' // Noncompliant
  }
}
```

Compliant solution:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_2"
      }
    }
  ]
}
```

```bicep
resource mysqlDbServer 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    minimalTlsVersion: 'TLS1_2'
  }
}
```

**How does this work?**

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following.

*Use TLS v1.2 or TLS v1.3.* Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

**Resources**

- Articles & blog posts
- Standards
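The same minimum-version principle carries over from templates to application code. As a sketch (assumed, not taken from this project's code), a Node.js HTTPS client can refuse handshakes below TLS 1.2:

```typescript
import * as https from 'node:https'

// Client-side counterpart of minimalTlsVersion: 'TLS1_2': any server
// offering only SSLv3, TLS 1.0, or TLS 1.1 fails during the handshake.
const tls12Agent = new https.Agent({
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.3'
})
```

Requests opt in via the agent option, e.g. `https.get(url, { agent: tls12Agent }, cb)`; servers can enforce the same floor with `tls.createServer({ minVersion: 'TLSv1.2', ... })`.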
### azureresourcemanager:S6329

Enabling public network access to cloud resources can affect an organization's ability to protect its data or internal operations from data theft or disruption. Depending on the component, inbound access from the Internet can be enabled via a public IP address, a `publicNetworkAccess` setting, or firewall rules that allow unrestricted address ranges.

Deciding to allow public access may happen for various reasons, such as quick maintenance, time saving, or simply by accident. This decision increases the likelihood of attacks on the organization, such as data theft and service disruption.

**Ask Yourself Whether**

This cloud resource:

- should be publicly accessible to any Internet user
- requires inbound traffic from the Internet to operate
There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites. Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components. The goal is to prevent the component from intercepting traffic coming in through the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

Using public network access, with Bicep:

  resource exampleSite 'Microsoft.Web/sites@2020-12-01' = {
    name: 'example-site'
    properties: {
      publicNetworkAccess: 'Enabled'
    }
  }

Using ARM templates:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": { "publicNetworkAccess": "Enabled" }
      }
    }]
  }

Using a nested config resource:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example",
      "resources": [{
        "type": "config",
        "apiVersion": "2020-12-01",
        "name": "example-config",
        "properties": { "publicNetworkAccess": "Enabled" }
      }]
    }]
  }

Using IP address ranges to control access to resources, with Bicep:

  resource exampleFirewall 'Microsoft.Sql/servers/firewallRules@2014-04-01' = {
    name: 'example-firewall'
    properties: {
      startIpAddress: '0.0.0.0'
      endIpAddress: '255.255.255.255'
    }
  }

Using ARM templates:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "0.0.0.0",
        "endIpAddress": "255.255.255.255"
      }
    }]
  }

Using a nested firewallRules resource:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [{
        "type": "firewallRules",
        "apiVersion": "2014-04-01",
        "name": "example-firewall",
        "properties": {
          "startIpAddress": "0.0.0.0",
          "endIpAddress": "255.255.255.255"
        }
      }]
    }]
  }

Compliant Solution

Using public network access, with Bicep:

  resource exampleSite 'Microsoft.Web/sites@2020-12-01' = {
    name: 'example-site'
    properties: {
      publicNetworkAccess: 'Disabled'
    }
  }

Using ARM templates:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": { "publicNetworkAccess": "Disabled" }
      }
    }]
  }

Using a nested config resource:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "resources": [{
        "type": "config",
        "apiVersion": "2020-12-01",
        "name": "example-config",
        "properties": { "publicNetworkAccess": "Disabled" }
      }]
    }]
  }

Using IP address ranges to control access to resources, with Bicep:

  resource exampleFirewall 'Microsoft.Sql/servers/firewallRules@2014-04-01' = {
    name: 'example-firewall'
    properties: {
      startIpAddress: '192.168.0.0'
      endIpAddress: '192.168.255.255'
    }
  }

Using ARM templates:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "192.168.0.0",
        "endIpAddress": "192.168.255.255"
      }
    }]
  }

Using a nested firewallRules resource:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [{
        "type": "firewallRules",
        "apiVersion": "2014-04-01",
        "name": "example-firewall",
        "properties": {
          "startIpAddress": "192.168.0.0",
          "endIpAddress": "192.168.255.255"
        }
      }]
    }]
  }

See
azureresourcemanager:S6378
Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks. Authenticating to an Azure resource via managed identities relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets Azure uses are not even accessible to end-users. In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions. By transparently taking care of the Azure Active Directory authentication, Managed Identities allow getting rid of day-to-day credentials management.

Ask Yourself Whether

The resource:
There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:
Alternatively, User-Assigned Managed Identities can also be used, but they do not guarantee the properties listed above.

Sensitive Code Example

Using ARM templates:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.ApiManagement/service",
      "apiVersion": "2022-09-01-preview",
      "name": "apiManagementService"
    }]
  }

Using Bicep:

  resource sensitiveApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
    name: 'apiManagementService'
    // Sensitive: no Managed Identity is defined
  }

Compliant Solution

Using ARM templates:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.ApiManagement/service",
      "apiVersion": "2022-09-01-preview",
      "name": "apiManagementService",
      "identity": { "type": "SystemAssigned" }
    }]
  }

Using Bicep:

  resource sensitiveApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
    name: 'apiManagementService'
    identity: {
      type: 'SystemAssigned'
    }
  }

See
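Once a System-Assigned identity is enabled, the resource's principal ID can be referenced directly when granting it permissions, so no credential ever appears in the template. A minimal Bicep sketch under assumptions (the roleDefinitionResourceId parameter and the abbreviated service definition are illustrative; required API Management service properties are omitted for brevity):

```bicep
// Illustrative sketch: grant the service's system-assigned identity a role.
param roleDefinitionResourceId string // full resource ID of an existing role definition

resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  identity: {
    type: 'SystemAssigned'
  }
  // ...required service properties omitted for brevity...
}

resource exampleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, 'apiManagementService', roleDefinitionResourceId)
  properties: {
    // Resolved by Azure at deployment time; no secret is involved.
    principalId: apiManagementService.identity.principalId
    roleDefinitionId: roleDefinitionResourceId
    principalType: 'ServicePrincipal'
  }
}
```

This pattern keeps authorization fully declarative: rotating or revoking access is a matter of changing the role assignment, not of managing credentials.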
azureresourcemanager:S6648
Azure Resource Manager templates define parameters as a way to reuse templates in different environments. Secure parameters (secure strings and secure objects) should not be assigned a default value.

Why is this an issue?

Parameters with the type securestring or secureObject are designed to carry sensitive values. Secure parameters can be assigned a default value, which will be used if the parameter is not supplied. This default value is not protected and is stored in cleartext in the deployment history.

What is the potential impact?

If the default value contains a secret, it will be disclosed to all accounts that have read access to the deployment history.

How to fix it in ARM templates

Code examples

Noncompliant code example

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "secretValue": {
        "type": "securestring",
        "defaultValue": "S3CR3T"
      }
    }
  }

Compliant solution

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "secretValue": {
        "type": "securestring"
      }
    }
  }

Resources

Documentation

Standards
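The same rule applies to Bicep, where secure parameters are declared with the @secure() decorator. A compliant sketch (the parameter name is illustrative):

```bicep
// Compliant: no default value, so nothing sensitive lands in the deployment history.
@secure()
param secretValue string
```

The value must then be supplied at deployment time, for example through a parameter file that references a Key Vault secret rather than embedding the value itself.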
azureresourcemanager:S6656
When using nested deployments in Azure, template expressions can be evaluated within the scope of the parent template or the scope of the nested template. If such a template expression evaluates a secure value of the parent template, it is possible to expose this value in the deployment history.

Why is this an issue?

Parameters with the type securestring or secureObject are normally protected: their values are not stored in the deployment history. When used in nested deployments, however, it is possible to embed secure parameters in such a way that they are visible afterward.

What is the potential impact?

If the nested deployment contains a secure parameter in this way, then the value of this parameter may be readable in the deployment history. This can lead to important credentials being leaked to unauthorized accounts.

How to fix it in ARM Templates

By setting the expressionEvaluationOptions scope to Inner, expressions in the nested template are evaluated within the scope of the nested deployment, so secure values of the parent template are not expanded into the deployment history.

Code examples

Noncompliant code example

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "adminUsername": {
        "type": "securestring",
        "defaultValue": "[newGuid()]"
      }
    },
    "resources": [{
      "name": "example",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": [{
            "name": "example",
            "type": "Microsoft.Compute/virtualMachines",
            "apiVersion": "2022-11-01",
            "properties": {
              "osProfile": {
                "adminUsername": "[parameters('adminUsername')]"
              }
            }
          }]
        }
      }
    }]
  }

Compliant solution

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "expressionEvaluationOptions": { "scope": "Inner" },
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "adminUsername": {
              "type": "securestring",
              "defaultValue": "[newGuid()]"
            }
          },
          "resources": [{
            "name": "example",
            "type": "Microsoft.Compute/virtualMachines",
            "apiVersion": "2022-11-01",
            "properties": {
              "osProfile": {
                "adminUsername": "[parameters('adminUsername')]"
              }
            }
          }]
        }
      }
    }]
  }

Resources

Documentation
Standards
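In Bicep, nested deployments are typically written as modules, which the compiler emits with inner expression-evaluation scope; a secure parameter passed into a module is therefore resolved inside the module rather than expanded in the parent deployment's history. A sketch under that assumption (file names are illustrative):

```bicep
// main.bicep — illustrative module call; vm.bicep declares its own @secure() parameter.
@secure()
param adminUsername string

module vmDeployment './vm.bicep' = {
  name: 'example'
  params: {
    adminUsername: adminUsername // evaluated in the module's own (inner) scope
  }
}
```

This is one reason to prefer modules over hand-written Microsoft.Resources/deployments resources when secrets are involved.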
azureresourcemanager:S5332
Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the http protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities:

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For Microsoft.Web/sites:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": { "httpsOnly": false }
    }]
  }

  resource symbolicname 'Microsoft.Web/sites@2022-03-01' = {
    properties: {
      httpsOnly: false // Sensitive
    }
  }

For Microsoft.Web/sites/config:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites/config",
      "name": "sites/example",
      "apiVersion": "2022-09-01",
      "properties": { "ftpsState": "AllAllowed" }
    }]
  }

  resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
    properties: {
      ftpsState: 'AllAllowed' // Sensitive
    }
  }

For Microsoft.Storage/storageAccounts:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Storage/storageAccounts",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": { "supportsHttpsTrafficOnly": false }
    }]
  }

  resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    properties: {
      supportsHttpsTrafficOnly: false // Sensitive
    }
  }

For Microsoft.ApiManagement/service/apis:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.ApiManagement/service/apis",
      "name": "service/example",
      "apiVersion": "2022-08-01",
      "properties": { "protocols": ["http"] }
    }]
  }

  resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
    properties: {
      protocols: ['http'] // Sensitive
    }
  }

For Microsoft.Cdn/profiles/endpoints:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Cdn/profiles/endpoints",
      "name": "profiles/example",
      "apiVersion": "2021-06-01",
      "properties": { "isHttpAllowed": true }
    }]
  }

  resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
    properties: {
      isHttpAllowed: true // Sensitive
    }
  }

For Microsoft.Cache/redisEnterprise/databases:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "name": "redisEnterprise/example",
      "apiVersion": "2022-01-01",
      "properties": { "clientProtocol": "Plaintext" }
    }]
  }

  resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
    properties: {
      clientProtocol: 'Plaintext' // Sensitive
    }
  }

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.DBforMySQL/servers",
      "name": "example",
      "apiVersion": "2017-12-01",
      "properties": { "sslEnforcement": "Disabled" }
    }]
  }

  resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
    properties: {
      sslEnforcement: 'Disabled' // Sensitive
    }
  }

Compliant Solution

For Microsoft.Web/sites:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": { "httpsOnly": true }
    }]
  }

  resource symbolicname 'Microsoft.Web/sites@2022-03-01' = {
    properties: {
      httpsOnly: true
    }
  }

For Microsoft.Web/sites/config:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Web/sites/config",
      "name": "sites/example",
      "apiVersion": "2022-09-01",
      "properties": { "ftpsState": "FtpsOnly" }
    }]
  }

  resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
    properties: {
      ftpsState: 'FtpsOnly'
    }
  }

For Microsoft.Storage/storageAccounts:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Storage/storageAccounts",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": { "supportsHttpsTrafficOnly": true }
    }]
  }

  resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    properties: {
      supportsHttpsTrafficOnly: true
    }
  }

For Microsoft.ApiManagement/service/apis:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.ApiManagement/service/apis",
      "name": "service/example",
      "apiVersion": "2022-08-01",
      "properties": { "protocols": ["https"] }
    }]
  }

  resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
    properties: {
      protocols: ['https']
    }
  }

For Microsoft.Cdn/profiles/endpoints:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Cdn/profiles/endpoints",
      "name": "profiles/example",
      "apiVersion": "2021-06-01",
      "properties": { "isHttpAllowed": false }
    }]
  }

  resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
    properties: {
      isHttpAllowed: false
    }
  }

For Microsoft.Cache/redisEnterprise/databases:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "name": "redisEnterprise/example",
      "apiVersion": "2022-01-01",
      "properties": { "clientProtocol": "Encrypted" }
    }]
  }

  resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
    properties: {
      clientProtocol: 'Encrypted'
    }
  }

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.DBforMySQL/servers",
      "name": "example",
      "apiVersion": "2017-12-01",
      "properties": { "sslEnforcement": "Enabled" }
    }]
  }

  resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
    properties: {
      sslEnforcement: 'Enabled'
    }
  }

See
azureresourcemanager:S6388
Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to access any unencrypted information.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt cloud storage that contains sensitive information.

Sensitive Code Example

For Microsoft.AzureArcData/sqlServerInstances/databases:

Disabled encryption on SQL server instance database:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "databases/example",
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": { "databaseOptions": { "isEncrypted": false } }
    }]
  }

  resource symbolicname 'Microsoft.AzureArcData/sqlServerInstances/databases@2023-03-15-preview' = {
    properties: {
      databaseOptions: { isEncrypted: false }
    }
  }

For Microsoft.Compute/disks, encryption is disabled by default.

For Microsoft.Compute/snapshots:

Disabled disk encryption with settings collection:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": { "encryptionSettingsCollection": { "enabled": false } }
    }]
  }

  resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
    properties: {
      encryptionSettingsCollection: { enabled: false }
    }
  }

For Microsoft.Compute/virtualMachines:

Disabled encryption at host level:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": { "securityProfile": { "encryptionAtHost": false } }
    }]
  }

  resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
    properties: {
      securityProfile: { encryptionAtHost: false }
    }
  }

Disabled encryption for managed disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": { "storageProfile": { "dataDisks": [{ "id": "myDiskId" }] } }
    }]
  }

  resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
    properties: {
      storageProfile: {
        dataDisks: [
          { name: 'myDisk' }
        ]
      }
    }
  }

Disabled encryption for OS disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": { "storageProfile": { "osDisk": { "encryptionSettings": { "enabled": false } } } }
    }]
  }

  resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
    properties: {
      storageProfile: {
        osDisk: {
          name: 'myDisk'
          encryptionSettings: { enabled: false }
        }
      }
    }
  }

Disabled encryption for OS managed disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": { "storageProfile": { "osDisk": { "managedDisk": { "id": "myDiskId" } } } }
    }]
  }

  resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
    properties: {
      storageProfile: {
        osDisk: {
          name: 'myDisk'
          managedDisk: { id: 'myDiskId' }
        }
      }
    }
  }

For Microsoft.Compute/virtualMachineScaleSets:

Disabled encryption at host level:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": { "virtualMachineProfile": { "securityProfile": { "encryptionAtHost": false } } }
    }]
  }

  resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
    properties: {
      virtualMachineProfile: {
        securityProfile: { encryptionAtHost: false }
      }
    }
  }

Disabled encryption for data disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": { "virtualMachineProfile": { "storageProfile": { "dataDisks": [{ "name": "myDataDisk" }] } } }
    }]
  }

  resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
    properties: {
      virtualMachineProfile: {
        storageProfile: {
          dataDisks: [
            { name: 'myDataDisk' }
          ]
        }
      }
    }
  }

Disabled encryption for OS disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": { "virtualMachineProfile": { "storageProfile": { "osDisk": { "name": "myOsDisk" } } } }
    }]
  }

  resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
    properties: {
      virtualMachineProfile: {
        storageProfile: {
          osDisk: { name: 'myOsDisk' }
        }
      }
    }
  }

For Microsoft.ContainerService/managedClusters:

Disabled encryption at host level:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": { "agentPoolProfiles": [{ "enableEncryptionAtHost": false }] }
    }]
  }

  resource symbolicname 'Microsoft.ContainerService/managedClusters@2023-03-02-preview' = {
    properties: {
      agentPoolProfiles: [
        { enableEncryptionAtHost: false }
      ]
    }
  }

For Microsoft.DataLakeStore/accounts:

Disabled encryption for Data Lake Store:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": { "encryptionState": "Disabled" }
    }]
  }

  resource symbolicname 'Microsoft.DataLakeStore/accounts@2016-11-01' = {
    properties: {
      encryptionState: 'Disabled'
    }
  }

For Microsoft.DBforMySQL/servers:

Disabled infrastructure double encryption for MySQL server:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": { "infrastructureEncryption": "Disabled" }
    }]
  }

  resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
    properties: {
      infrastructureEncryption: 'Disabled'
    }
  }

For Microsoft.DBforPostgreSQL/servers:

Disabled infrastructure double encryption for PostgreSQL server:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": { "infrastructureEncryption": "Disabled" }
    }]
  }

  resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
    properties: {
      infrastructureEncryption: 'Disabled'
    }
  }

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Disabled encryption for a Cassandra Cluster datacenter's managed disk and backup:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "cassandraClusters/example",
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": { "diskCapacity": 4 }
    }]
  }

  resource symbolicname 'Microsoft.DocumentDB/cassandraClusters/dataCenters@2023-04-15' = {
    name: 'string'
    parent: parent
    properties: {
      diskCapacity: 4
    }
  }

For Microsoft.HDInsight/clusters:

Disabled encryption for data disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": { "computeProfile": { "roles": [{ "encryptDataDisks": false }] } }
    }]
  }

  resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
    properties: {
      computeProfile: {
        roles: [
          { encryptDataDisks: false }
        ]
      }
    }
  }

Disabled encryption for data disk at application level:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "clusters/example",
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": { "computeProfile": { "roles": [{ "encryptDataDisks": false }] } }
    }]
  }

  resource symbolicname 'Microsoft.HDInsight/clusters/applications@2021-06-01' = {
    properties: {
      computeProfile: {
        roles: [
          { encryptDataDisks: false }
        ]
      }
    }
  }

Disabled encryption for resource disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": { "diskEncryptionProperties": { "encryptionAtHost": false } }
    }]
  }

  resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
    properties: {
      diskEncryptionProperties: { encryptionAtHost: false }
    }
  }

For Microsoft.Kusto/clusters:

Disabled encryption for disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": { "enableDiskEncryption": false }
    }]
  }

  resource symbolicname 'Microsoft.Kusto/clusters@2022-12-29' = {
    properties: {
      enableDiskEncryption: false
    }
  }

For Microsoft.RecoveryServices/vaults:

Disabled encryption for disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": { "encryption": { "infrastructureEncryption": "Disabled" } }
    }]
  }

  resource symbolicname 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
    properties: {
      encryption: { infrastructureEncryption: 'Disabled' }
    }
  }

Disabled encryption on infrastructure for backup:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "vaults/example",
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": { "infrastructureEncryptionState": "Disabled" }
    }]
  }

  resource symbolicname 'Microsoft.RecoveryServices/vaults/backupEncryptionConfigs@2023-01-01' = {
    properties: {
      encryptionAtRestType: '{CustomerManaged | MicrosoftManaged}'
      infrastructureEncryptionState: 'Disabled'
    }
  }

For Microsoft.RedHatOpenShift/openShiftClusters:

Disabled disk encryption for master profile and worker profiles:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": { "encryptionAtHost": "Disabled" },
        "workerProfiles": [{ "encryptionAtHost": "Disabled" }]
      }
    }]
  }

  resource symbolicname 'Microsoft.RedHatOpenShift/openShiftClusters@2022-09-04' = {
    properties: {
      masterProfile: { encryptionAtHost: 'Disabled' }
      workerProfiles: [
        { encryptionAtHost: 'Disabled' }
      ]
    }
  }

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Disabled encryption for SQL Virtual Machine:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": { "autoBackupSettings": { "enableEncryption": false } }
    }]
  }

  resource symbolicname 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-08-01-preview' = {
    properties: {
      autoBackupSettings: { enableEncryption: false }
    }
  }

For Microsoft.Storage/storageAccounts:

Disabled enforcing of infrastructure encryption for double encryption of data:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": { "encryption": { "requireInfrastructureEncryption": false } }
    }]
  }

  resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    properties: {
      encryption: { requireInfrastructureEncryption: false }
    }
  }

For Microsoft.Storage/storageAccounts/encryptionScopes:

Disabled enforcing of infrastructure encryption for double encryption of data at encryption scope level:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "storageAccounts/example",
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": { "requireInfrastructureEncryption": false }
    }]
  }

  resource symbolicname 'Microsoft.Storage/storageAccounts/encryptionScopes@2022-09-01' = {
    properties: {
      requireInfrastructureEncryption: false
    }
  }

Compliant Solution

For Microsoft.AzureArcData/sqlServerInstances/databases:

Enabled encryption on SQL server instance database:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "databases/example",
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": { "databaseOptions": { "isEncrypted": true } }
    }]
  }

  resource symbolicname 'Microsoft.AzureArcData/sqlServerInstances/databases@2023-03-15-preview' = {
    properties: {
      databaseOptions: { isEncrypted: true }
    }
  }

For Microsoft.Compute/disks:

Enabled encryption for managed disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": { "encryption": { "diskEncryptionSetId": "string", "type": "string" } }
    }]
  }

  resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
    properties: {
      encryption: {
        diskEncryptionSetId: 'string'
        type: 'string'
      }
    }
  }

Enabled encryption through setting encryptionSettingsCollection:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [{
            "diskEncryptionKey": { "secretUrl": "string", "sourceVault": { "id": "string" } }
          }]
        }
      }
    }]
  }

  resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
    properties: {
      encryptionSettingsCollection: {
        enabled: true
        encryptionSettings: [
          {
            diskEncryptionKey: {
              secretUrl: 'string'
              sourceVault: { id: 'string' }
            }
          }
        ]
      }
    }
  }

Enabled encryption through a security profile for an OS disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' | 'TrustedLaunch'}"
        }
      }
    }]
  }

  resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
    properties: {
      securityProfile: {
        secureVMDiskEncryptionSetId: 'string'
        securityType: '{ConfidentialVM_DiskEncryptedWithCustomerKey | ConfidentialVM_DiskEncryptedWithPlatformKey | ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey | TrustedLaunch}'
      }
    }
  }

For Microsoft.Compute/snapshots:

Enabled disk encryption for snapshot:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "{'EncryptionAtRestWithCustomerKey' | 'EncryptionAtRestWithPlatformAndCustomerKeys' | 'EncryptionAtRestWithPlatformKey'}"
        }
      }
    }]
  }

  resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
    properties: {
      encryption: {
        diskEncryptionSetId: 'string'
        type: '{EncryptionAtRestWithCustomerKey | EncryptionAtRestWithPlatformAndCustomerKeys | EncryptionAtRestWithPlatformKey}'
      }
    }
  }

Enabled disk encryption with settings collection:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [{
            "diskEncryptionKey": { "secretUrl": "", "sourceVault": { "id": "string" } }
          }],
          "encryptionSettingsVersion": "{'1.0' | '1.1'}"
        }
      }
    }]
  }

  resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
    properties: {
      encryptionSettingsCollection: {
        enabled: true
        encryptionSettings: [
          {
            diskEncryptionKey: {
              secretUrl: ''
              sourceVault: { id: 'string' }
            }
          }
        ]
        encryptionSettingsVersion: '{1.0 | 1.1}'
      }
    }
  }

Enabled disk encryption through security profile:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' | 'TrustedLaunch'}"
        }
      }
    }]
  }

  resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
    properties: {
      securityProfile: {
        secureVMDiskEncryptionSetId: 'string'
        securityType: '{ConfidentialVM_DiskEncryptedWithCustomerKey | ConfidentialVM_DiskEncryptedWithPlatformKey | ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey | TrustedLaunch}'
      }
    }
  }

For Microsoft.Compute/virtualMachines:

Enabled encryption at host level:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": { "securityProfile": { "encryptionAtHost": true } }
    }]
  }

  resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
    properties: {
      securityProfile: { encryptionAtHost: true }
    }
  }

Enabled encryption for managed disk:

  {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [{
            "id": "myDiskId",
            "managedDisk": { "diskEncryptionSet": { "id": "string" } }
          }]
        }
      }
    }]
  }

  resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
    properties: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDisk'
            managedDisk: {
              diskEncryptionSet: { id: 'string' }
            }
          }
        ]
      }
    }
  }

Enabled encryption for managed disk through security
profile: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachines", "apiVersion": "2022-11-01", "properties": { "storageProfile": { "dataDisks": [ { "id": "myDiskId", "managedDisk": { "securityProfile": { "diskEncryptionSet": { "id": "string" } } } } ] } } } ] } resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = { properties: { storageProfile: { dataDisks: [ { name: 'myDisk' managedDisk: { securityProfile: { diskEncryptionSet: { id: 'string' } } } } ] } } } Enabled encryption for OS disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachines", "apiVersion": "2022-11-01", "properties": { "storageProfile": { "osDisk": { "encryptionSettings": { "enabled": true, "diskEncryptionKey": { "secretUrl": "string", "sourceVault": { "id": "string" } } } } } } } ] } resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = { properties: { storageProfile: { osDisk: { name: 'myDisk' encryptionSettings: { enabled: true diskEncryptionKey: { secretUrl: 'string' sourceVault: { id: 'string' } } } } } } } Enabled encryption for OS managed disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachines", "apiVersion": "2022-11-01", "properties": { "storageProfile": { "osDisk": { "managedDisk": { "id": "myDiskId", "diskEncryptionSet": { "id": "string" } } } } } } ] } resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = { properties: { storageProfile: { osDisk: { name: 'myDisk' managedDisk: { id: 'myDiskId' diskEncryptionSet: { id: 'string' } } } } } } Enabled encryption for OS managed disk through security profile: { 
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachines", "apiVersion": "2022-11-01", "properties": { "storageProfile": { "osDisk": { "managedDisk": { "securityProfile": { "diskEncryptionSet": { "id": "string" } } } } } } } ] } resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = { properties: { storageProfile: { osDisk: { name: 'myDisk' managedDisk: { id: 'myDiskId' securityProfile: { diskEncryptionSet: { id: 'string' } } } } } } } For Microsoft.Compute/virtualMachineScaleSets: Enabled encryption at host level: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachineScaleSets", "apiVersion": "2022-11-01", "properties": { "virtualMachineProfile": { "securityProfile": { "encryptionAtHost": true } } } } ] } resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = { properties: { virtualMachineProfile: { securityProfile: { encryptionAtHost: true } } } } Enabled encryption for data disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachineScaleSets", "apiVersion": "2022-11-01", "properties": { "virtualMachineProfile": { "storageProfile": { "dataDisks": [ { "name": "myDataDisk", "managedDisk": { "diskEncryptionSet": { "id": "string" } } } ] } } } } ] } resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = { properties: { virtualMachineProfile: { storageProfile: { dataDisks: [ { name: 'myDataDisk' managedDisk: { diskEncryptionSet: { id: 'string' } } } ] } } } } Enabled encryption for data disk through security profile: { "$schema": 
"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachineScaleSets", "apiVersion": "2022-11-01", "properties": { "virtualMachineProfile": { "storageProfile": { "dataDisks": [ { "name": "myDataDisk", "managedDisk": { "securityProfile": { "diskEncryptionSet": { "id": "string" } } } } ] } } } } ] } resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = { properties: { virtualMachineProfile: { storageProfile: { dataDisks: [ { name: 'myDataDisk' managedDisk: { securityProfile: { diskEncryptionSet: { id: 'string' } } } } ] } } } } Enabled encryption for OS disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachineScaleSets", "apiVersion": "2022-11-01", "properties": { "virtualMachineProfile": { "storageProfile": { "osDisk": { "name": "myOsDisk", "managedDisk": { "diskEncryptionSet": { "id": "string" } } } } } } } ] } resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = { properties: { virtualMachineProfile: { storageProfile: { osDisk: { name: 'myOsDisk' managedDisk: { diskEncryptionSet: { id: 'string' } } } } } } } Enabled encryption for OS disk through security profile: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Compute/virtualMachineScaleSets", "apiVersion": "2022-11-01", "properties": { "virtualMachineProfile": { "storageProfile": { "osDisk": { "name": "myOsDisk", "managedDisk": { "securityProfile": { "diskEncryptionSet": { "id": "string" } } } } } } } } ] } resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = { properties: { virtualMachineProfile: { storageProfile: { osDisk: { 
name: 'myOsDisk' managedDisk: { securityProfile: { diskEncryptionSet: { id: 'string' } } } } } } } } For Microsoft.ContainerService/managedClusters: Enabled encryption at host and set the disk encryption set ID: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.ContainerService/managedClusters", "apiVersion": "2023-03-02-preview", "properties": { "agentPoolProfiles": [ { "enableEncryptionAtHost": true } ], "diskEncryptionSetID": "string" } } ] } resource symbolicname 'Microsoft.ContainerService/managedClusters@2023-03-02-preview' = { properties: { agentPoolProfiles: [ { enableEncryptionAtHost: true } ] diskEncryptionSetID: 'string' } } For Microsoft.DataLakeStore/accounts: Enabled encryption for Data Lake Store: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.DataLakeStore/accounts", "apiVersion": "2016-11-01", "properties": { "encryptionState": "Enabled" } } ] } resource symbolicname 'Microsoft.DataLakeStore/accounts@2016-11-01' = { properties: { encryptionState: 'Enabled' } } For Microsoft.DBforMySQL/servers: Enabled infrastructure double encryption for MySQL server: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.DBforMySQL/servers", "apiVersion": "2017-12-01", "properties": { "infrastructureEncryption": "Enabled" } } ] } resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = { properties: { infrastructureEncryption: 'Enabled' } } For Microsoft.DBforPostgreSQL/servers: Enabled infrastructure double encryption for PostgreSQL server: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", 
"resources": [ { "name": "example", "type": "Microsoft.DBforPostgreSQL/servers", "apiVersion": "2017-12-01", "properties": { "infrastructureEncryption": "Enabled" } } ] } resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = { properties: { infrastructureEncryption: 'Enabled' } } For Microsoft.DocumentDB/cassandraClusters/dataCenters: Enabled encryption for a Cassandra Cluster datacenter’s managed disk and backup: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "cassandraClusters/example", "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters", "apiVersion": "2023-04-15", "properties": { "diskCapacity": 4, "backupStorageCustomerKeyUri": "string", "managedDiskCustomerKeyUri": "string" } } ] } resource symbolicname 'Microsoft.DocumentDB/cassandraClusters/dataCenters@2023-04-15' = { name: 'string' parent: parent properties: { diskCapacity: 4 backupStorageCustomerKeyUri: 'string' managedDiskCustomerKeyUri: 'string' } } For Microsoft.HDInsight/clusters: Enabled encryption for data disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.HDInsight/clusters", "apiVersion": "2021-06-01", "properties": { "computeProfile": { "roles": [ { "encryptDataDisks": true } ] } } } ] } resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = { properties: { computeProfile: { roles: [ { encryptDataDisks: true } ] } } } Enabled encryption for data disk at application level: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "clusters/example", "type": "Microsoft.HDInsight/clusters/applications", "apiVersion": "2021-06-01", "properties": { "computeProfile": { "roles": [ { "encryptDataDisks": true } ] } } } ] } resource symbolicname 
'Microsoft.HDInsight/clusters/applications@2021-06-01' = { properties: { computeProfile: { roles: [ { encryptDataDisks: true } ] } } } Enabled encryption for resource disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.HDInsight/clusters", "apiVersion": "2021-06-01", "properties": { "diskEncryptionProperties": { "encryptionAtHost": true } } } ] } resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = { properties: { diskEncryptionProperties: { encryptionAtHost: true } } } For Microsoft.Kusto/clusters: Enabled encryption for disk: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Kusto/clusters", "apiVersion": "2022-12-29", "properties": { "enableDiskEncryption": true } } ] } resource symbolicname 'Microsoft.Kusto/clusters@2022-12-29' = { properties: { enableDiskEncryption: true } } For Microsoft.RecoveryServices/vaults: Enabled encryption on infrastructure: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.RecoveryServices/vaults", "apiVersion": "2023-01-01", "properties": { "encryption": { "infrastructureEncryption": "Enabled" } } } ] } resource symbolicname 'Microsoft.RecoveryServices/vaults@2023-01-01' = { properties: { encryption: { infrastructureEncryption: 'Enabled' } } } Enabled encryption on infrastructure for backup: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "vaults/example", "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs", "apiVersion": "2023-01-01", "properties": { "encryptionAtRestType": "{'CustomerManaged' | 'MicrosoftManaged'}", 
"infrastructureEncryptionState": "Enabled" } } ] } resource symbolicname 'Microsoft.RecoveryServices/vaults/backupEncryptionConfigs@2023-01-01' = { properties: { encryptionAtRestType: '{CustomerManaged | MicrosoftManaged}' infrastructureEncryptionState: 'Enabled' } } For Microsoft.RedHatOpenShift/openShiftClusters: Enabled disk encryption for master profile and worker profiles: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.RedHatOpenShift/openShiftClusters", "apiVersion": "2022-09-04", "properties": { "masterProfile": { "diskEncryptionSetId": "string", "encryptionAtHost": "Enabled" }, "workerProfiles": [ { "diskEncryptionSetId": "string", "encryptionAtHost": "Enabled" } ] } } ] } resource symbolicname 'Microsoft.RedHatOpenShift/openShiftClusters@2022-09-04' = { properties: { masterProfile: { diskEncryptionSetId: 'string' encryptionAtHost: 'Enabled' } workerProfiles: [ { diskEncryptionSetId: 'string' encryptionAtHost: 'Enabled' } ] } } For Microsoft.SqlVirtualMachine/sqlVirtualMachines: Enabled encryption for SQL Virtual Machine: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines", "apiVersion": "2022-08-01-preview", "properties": { "autoBackupSettings": { "enableEncryption": true, "password": "string" } } } ] } resource symbolicname 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-08-01-preview' = { properties: { autoBackupSettings: { enableEncryption: true password: 'string' } } } For Microsoft.Storage/storageAccounts: Enabled enforcing of infrastructure encryption for double encryption of data: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": 
"Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01", "properties": { "encryption": { "requireInfrastructureEncryption": true } } } ] } resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = { properties: { encryption: { requireInfrastructureEncryption: true } } } For Microsoft.Storage/storageAccounts/encryptionScopes: Enabled enforcing of infrastructure encryption for double encryption of data at encryption scope level: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "storageAccounts/example", "type": "Microsoft.Storage/storageAccounts/encryptionScopes", "apiVersion": "2022-09-01", "properties": { "requireInfrastructureEncryption": true } } ] } resource symbolicname 'Microsoft.Storage/storageAccounts/encryptionScopes@2022-09-01' = { properties: { requireInfrastructureEncryption: true } } See |
azureresourcemanager:S6321 |
Why is this an issue?

Cloud platforms such as Azure support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system. Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "networkSecurityGroups/example", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "apiVersion": "2022-11-01", "properties": { "protocol": "*", "destinationPortRange": "*", "sourceAddressPrefix": "*", "access": "Allow", "direction": "Inbound" } } ] } resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = { name: 'securityRules' properties: { direction: 'Inbound' access: 'Allow' protocol: '*' destinationPortRange: '*' sourceAddressPrefix: '*' } }

Compliant solution: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "networkSecurityGroups/example", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "apiVersion": "2022-11-01", "properties": { "protocol": "*", "destinationPortRange": "22", "sourceAddressPrefix": "10.0.0.0/24", "access": "Allow", "direction": "Inbound" } } ] } resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = { name: 'securityRules' properties: { direction: 'Inbound' access: 'Allow' protocol: '*' destinationPortRange: '22' sourceAddressPrefix: '10.0.0.0/24' } }

Resources

Documentation
Standards |
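The compliant solution above hard-codes the trusted range. One way to reuse the same rule across environments is to pass the range in as a template parameter. The sketch below is illustrative only: the parameter name, rule name, priority, and default CIDR are assumptions, not part of the rule description.

```bicep
// Illustrative sketch: parameter name, rule name, priority, and default CIDR are assumptions.
@description('CIDR block trusted for remote administration, e.g. a bastion subnet')
param trustedAdminCidr string = '10.0.0.0/24'

resource allowSshFromBastion 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'networkSecurityGroups/allow-ssh-from-bastion'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: 'Tcp'
    sourcePortRange: '*'
    destinationPortRange: '22'
    sourceAddressPrefix: trustedAdminCidr // Compliant: only the trusted range may connect
    destinationAddressPrefix: '*'
    priority: 100
  }
}
```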
azureresourcemanager:S6364 |
Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident. Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident. Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to a duration long enough to be able to restore service in case of an incident.

Sensitive Code Example

For Azure App Service: resource webApp 'Microsoft.Web/sites@2022-03-01' = { name: 'webApp' } resource backup 'config@2022-03-01' = { name: 'backup' parent: webApp properties: { backupSchedule: { frequencyInterval: 1 frequencyUnit: 'Day' keepAtLeastOneBackup: true retentionPeriodInDays: 2 // Sensitive } } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2022-03-01", "name": "webApp" }, { "type": "Microsoft.Web/sites/config", "apiVersion": "2022-03-01", "name": "webApp/backup", "properties": { "backupSchedule": { "frequencyInterval": 1, "frequencyUnit": "Day", "keepAtLeastOneBackup": true, "retentionPeriodInDays": 2 } }, "dependsOn": [ "[resourceId('Microsoft.Web/sites', 'webApp')]" ] } ] }

For Azure Cosmos DB accounts: resource cosmosDb 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = { properties: { backupPolicy: { type: 'Periodic' periodicModeProperties: { backupIntervalInMinutes: 1440 backupRetentionIntervalInHours: 8 // Sensitive } } } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.DocumentDB/databaseAccounts", "apiVersion": "2023-04-15", "properties": { "backupPolicy": { "type": "Periodic", "periodicModeProperties": { "backupIntervalInMinutes": 1440, "backupRetentionIntervalInHours": 8 } } } } ] }

For Azure Backup vault policies: resource vault 'Microsoft.RecoveryServices/vaults@2023-01-01' = { name: 'testVault' resource backupPolicy 'backupPolicies@2023-01-01' = { name: 'backupPolicy' properties: { backupManagementType: 'AzureSql' 
retentionPolicy: { retentionPolicyType: 'SimpleRetentionPolicy' retentionDuration: { count: 2 // Sensitive durationType: 'Days' } } } } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.RecoveryServices/vaults", "apiVersion": "2023-01-01", "name": "testVault", "resources": [ { "type": "backupPolicies", "apiVersion": "2023-01-01", "name": "testVault/backupPolicy", "properties": { "backupManagementType": "AzureSql", "retentionPolicy": { "retentionPolicyType": "SimpleRetentionPolicy", "retentionDuration": { "count": 2, "durationType": "Days" } } } } ] } ] }

Compliant Solution

For Azure App Service: resource webApp 'Microsoft.Web/sites@2022-03-01' = { name: 'webApp' } resource backup 'config@2022-03-01' = { name: 'backup' parent: webApp properties: { backupSchedule: { frequencyInterval: 1 frequencyUnit: 'Day' keepAtLeastOneBackup: true retentionPeriodInDays: 8 } } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2022-03-01", "name": "webApp" }, { "type": "Microsoft.Web/sites/config", "apiVersion": "2022-03-01", "name": "webApp/backup", "properties": { "backupSchedule": { "frequencyInterval": 1, "frequencyUnit": "Day", "keepAtLeastOneBackup": true, "retentionPeriodInDays": 30 } }, "dependsOn": [ "[resourceId('Microsoft.Web/sites', 'webApp')]" ] } ] }

For Azure Cosmos DB accounts: resource cosmosDb 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = { properties: { backupPolicy: { type: 'Periodic' periodicModeProperties: { backupIntervalInMinutes: 1440 backupRetentionIntervalInHours: 192 } } } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.DocumentDB/databaseAccounts", "apiVersion": 
"2023-04-15", "properties": { "backupPolicy": { "type": "Periodic", "periodicModeProperties": { "backupIntervalInMinutes": 1440, "backupRetentionIntervalInHours": 720 } } } } ] } For Azure Backup vault policies: resource vault 'Microsoft.RecoveryServices/vaults@2023-01-01' = { name: 'testVault' resource backupPolicy 'backupPolicies@2023-01-01' = { name: 'backupPolicy' properties: { backupManagementType: 'AzureSql' retentionPolicy: { retentionPolicyType: 'SimpleRetentionPolicy' retentionDuration: { count: 8 durationType: 'Days' } } } } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.RecoveryServices/vaults", "apiVersion": "2023-01-01", "name": "testVault", "resources": [ { "type": "backupPolicies", "apiVersion": "2023-01-01", "name": "testVault/backupPolicy", "properties": { "backupManagementType": "AzureSql", "retentionPolicy": { "retentionPolicyType": "SimpleRetentionPolicy", "retentionDuration": { "count": 30, "durationType": "Days" } } } } ] } ] } |
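The compliant examples hard-code the retention value; one way to keep it from silently regressing is to expose it as a constrained parameter. This is a minimal sketch for the App Service case, assuming the resource names, the @minValue floor, and the 30-day default, all of which are illustrative choices rather than values from the rule.

```bicep
// Illustrative sketch: names, the @minValue floor, and the default are assumptions.
@minValue(7)
@description('Backup retention in days; must stay long enough to recover from an incident')
param retentionDays int = 30

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: 'webApp'
  location: resourceGroup().location
}

resource backup 'Microsoft.Web/sites/config@2022-03-01' = {
  name: 'backup'
  parent: webApp
  properties: {
    backupSchedule: {
      frequencyInterval: 1
      frequencyUnit: 'Day'
      keepAtLeastOneBackup: true
      retentionPeriodInDays: retentionDays // Compliant while the parameter floor holds
    }
  }
}
```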
azureresourcemanager:S6379 |
Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts. Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources. In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Batch/batchAccounts/pools", "apiVersion": "2022-10-01", "properties": { "startTask": { "userIdentity": { "autoUser": { "elevationLevel": "Admin" } } } } } ] } resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = { properties: { startTask: { userIdentity: { autoUser: { elevationLevel: 'Admin' // Sensitive } } } } }

For Azure Container Registries: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.ContainerRegistry/registries", "apiVersion": "2023-01-01-preview", "properties": { "adminUserEnabled": true } } ] } resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = { properties: { adminUserEnabled: true // Sensitive } }

Compliant Solution

For Azure Batch Pools: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Batch/batchAccounts/pools", "apiVersion": "2022-10-01", "properties": { "startTask": { "userIdentity": { "autoUser": { "elevationLevel": "NonAdmin" } } } } } ] } resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = { properties: { startTask: { userIdentity: { autoUser: { elevationLevel: 'NonAdmin' } } } } }

For Azure Container Registries: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.ContainerRegistry/registries", 
"apiVersion": "2023-01-01-preview", "properties": { "adminUserEnabled": false } } ] } resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = { properties: { adminUserEnabled: false } } See |
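For container registries, disabling the shared admin user means clients authenticate with their own identities (for example through Azure role-based access control), which preserves access traceability. A minimal sketch of a full registry declaration, assuming a Standard-tier registry; the name and SKU are illustrative, not from the rule.

```bicep
// Illustrative sketch: the registry name and SKU are assumptions.
resource registry 'Microsoft.ContainerRegistry/registries@2023-01-01-preview' = {
  name: 'exampleregistry'
  location: resourceGroup().location
  sku: {
    name: 'Standard'
  }
  properties: {
    adminUserEnabled: false // Compliant: clients use their own identities via RBAC
  }
}
```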
azureresourcemanager:S6380 |
Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources. Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload. Using authentication coupled with fine-grained authorizations helps bring defense-in-depth and bring traceability to investigators of security incidents. Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms. Ask Yourself Whether
There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access. If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Service: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2022-03-01", "name": "example" } ] } resource appService 'Microsoft.Web/sites@2022-09-01' = { name: 'example' // Sensitive: no authentication defined }

For API Management: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.ApiManagement/service", "apiVersion": "2022-09-01-preview", "name": "example" } ] } resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = { name: 'example' // Sensitive: no portal authentication defined resource apis 'apis@2022-09-01-preview' = { name: 'exampleApi' properties: { path: '/test' // Sensitive: no API authentication defined } } }

For Data Factory Linked Services: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.DataFactory/factories/linkedservices", "apiVersion": "2018-06-01", "name": "example", "properties": { "type": "Web", "typeProperties": { "authenticationType": "Anonymous" } } } ] } resource linkedService 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = { name: 'example' properties: { type: 'Web' typeProperties: { authenticationType: 'Anonymous' // Sensitive } } }

For Storage Accounts and Storage Containers: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01", 
"name": "example", "properties": { "allowBlobPublicAccess": true } } ] } resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'example' properties: { allowBlobPublicAccess: true // Sensitive } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01", "name": "example", "resources": [ { "type": "blobServices/containers", "apiVersion": "2022-09-01", "name": "blobContainerExample", "properties": { "publicAccess": "Blob" } } ] } ] } resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'example' resource blobService 'blobServices@2022-09-01' = { name: 'default' resource containers 'containers@2022-09-01' = { name: 'exampleContainer' properties: { publicAccess: 'Blob' // Sensitive } } } }

For Redis Caches: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Cache/redis", "apiVersion": "2022-06-01", "name": "example", "properties": { "redisConfiguration": { "authnotrequired": "true" } } } ] } resource redisCache 'Microsoft.Cache/redis@2023-04-01' = { name: 'example' location: location properties: { redisConfiguration: { authnotrequired: 'true' // Sensitive } } }

Compliant Solution

For App Services and equivalent: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2022-03-01", "name": "example", "resources": [ { "type": "config", "apiVersion": "2022-03-01", "name": "authsettingsV2", "properties": { "globalValidation": { "requireAuthentication": true, "unauthenticatedClientAction": "RedirectToLoginPage" } } } ] } ] } resource appService 'Microsoft.Web/sites@2022-09-01' = { name: 'example' resource authSettings 
'config@2022-09-01' = { // Compliant name: 'authsettingsV2' properties: { globalValidation: { requireAuthentication: true unauthenticatedClientAction: 'AllowAnonymous' } platform: { enabled: true } } } } For API Management: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.ApiManagement/service", "apiVersion": "2022-09-01-preview", "name": "example", "resources": [ { "type": "portalsettings", "apiVersion": "2022-09-01-preview", "name": "signin", "properties": { "enabled": true } }, { "type": "apis", "apiVersion": "2022-09-01-preview", "name": "exampleApi", "properties": { "authenticationSettings": { "openid": { "bearerTokenSendingMethods": ["authorizationHeader"], "openidProviderId": "<an OpenID provider ID>" } } } } ] } ] } resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = { name: 'example' resource portalSettings 'portalsettings@2022-09-01-preview' = { name: 'signin' properties: { enabled: true // Compliant: Sign-in is enabled for portal access } } resource apis 'apis@2022-09-01-preview' = { name: 'exampleApi' properties: { path: '/test' authenticationSettings: { // Compliant: API has authentication enabled openid: { bearerTokenSendingMethods: ['authorizationHeader'] openidProviderId: '<an OpenID provider ID>' } } } } } For Data Factory Linked Services: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.DataFactory/factories/linkedservices", "apiVersion": "2018-06-01", "name": "example", "properties": { "type": "Web", "typeProperties": { "authenticationType": "Basic" } } } ] } @secure() @description('The password for authentication') param password string resource linkedService 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = { name: 'example' properties: { type: 'Web' typeProperties: { 
authenticationType: 'Basic' // Compliant username: 'test' password: { type: 'SecureString' value: password } } } } For Storage Accounts: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01", "name": "example", "properties": { "allowBlobPublicAccess": false } } ] } resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'example' properties: { allowBlobPublicAccess: false // Compliant } } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2022-09-01", "name": "example", "resources": [ { "type": "blobServices/containers", "apiVersion": "2022-09-01", "name": "blobContainerExample", "properties": { "publicAccess": "None" } } ] } ] } resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = { name: 'example' resource blobService 'blobServices@2022-09-01' = { name: 'default' resource containers 'containers@2022-09-01' = { name: 'exampleContainer' properties: { publicAccess: 'None' // Compliant } } } } For Redis Caches: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Cache/redis", "apiVersion": "2022-06-01", "name": "example", "properties": { "redisConfiguration": {} } } ] } resource redisCache 'Microsoft.Cache/redis@2023-04-01' = { name: 'example' location: location properties: { redisConfiguration: { // Compliant: authentication is enabled by default } } } See |
||||||||||||
azureresourcemanager:S6381 |
Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users. An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner. This rule raises an issue when one of the following roles is assigned:
Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices
Sensitive Code Example { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Authorization/roleAssignments", "apiVersion": "2022-04-01", "properties": { "description": "Assign the contributor role", "principalId": "string", "principalType": "ServicePrincipal", "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]" } } ] } resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = { scope: tenant() properties: { description: 'Assign the contributor role' principalId: 'string' principalType: 'ServicePrincipal' roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c') // Sensitive } } Compliant Solution { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Authorization/roleAssignments", "apiVersion": "2022-04-01", "properties": { "description": "Assign the reader role", "principalId": "string", "principalType": "ServicePrincipal", "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]" } } ] } resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = { scope: tenant() properties: { description: 'Assign the reader role' principalId: 'string' principalType: 'ServicePrincipal' roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7') } } See
|
||||||||||||
azureresourcemanager:S6385 |
Defining a custom role with the same level of permissions as the built-in Owner role is security-sensitive. Why is this an issue? In Azure, the Owner role grants full access to all resources in its scope, including the ability to assign roles and permissions to other identities. Because it is a powerful entitlement, it should be granted to as few users as possible. When a custom role has the same level of permissions as the Owner role, it should be treated with the same caution. What is the potential impact? Custom roles that provide the same level of permissions as Owner expose the organization to the same risks as the built-in role. If the affected role is unexpectedly assigned to users, they can compromise the affected scope. They can do so in the long term by assigning dangerous roles to other users or entities. Depending on the scope to which the role is assignable, the exact impact of a successful exploitation may vary. It generally ranges from data compromise to the takeover of the cloud infrastructure. Infrastructure takeover By obtaining the right role, an attacker can gain control over part or all of the Azure infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining. This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions. Furthermore, corporate Azure infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure. Compromise of sensitive data If the affected service is used to store or process personally identifiable information or other sensitive data, attackers with the correct role could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. 
In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed. Financial loss Financial losses can occur when a malicious user is able to use a paid third-party-provided service. Each user assigned an overly permissive role will be able to use the third-party service without limit and for their own needs, including in ways that were not expected. This additional use will lead to added costs with the Azure service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected environment. This might result in a partial denial of service for all legitimate users. How to fix it To reduce the risk of intrusion by a compromised owner, it is recommended to limit the number of subscription owners. Code examples Noncompliant code example { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Authorization/roleDefinitions", "apiVersion": "2022-04-01", "properties": { "permissions": [ { "actions": ["*"], "notActions": [] } ], "assignableScopes": [ "[subscription().id]" ] } } ] } targetScope = 'managementGroup' resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = { // Sensitive properties: { permissions: [ { actions: ['*'] notActions: [] } ] assignableScopes: [ managementGroup().id ] } } Compliant solution { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Authorization/roleDefinitions", "apiVersion": "2022-04-01", "properties": { "permissions": [ { "actions": ["Microsoft.Compute/*"], "notActions": [] } ], "assignableScopes": [ "[subscription().id]" ] } } ] } targetScope = 'managementGroup' resource roleDef 
'Microsoft.Authorization/roleDefinitions@2022-04-01' = { properties: { permissions: [ { actions: ['Microsoft.Compute/*'] notActions: [] } ] assignableScopes: [ managementGroup().id ] } } Going the extra mile Here is a list of recommendations that can be followed regarding good usage of roles:
ResourcesDocumentation
Standards |
||||||||||||
azureresourcemanager:S6387 |
Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope". The widest scopes a role can be assigned to are the management group and the subscription.
In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices
Sensitive Code Example targetScope = 'subscription' // Sensitive resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = { name: guid(subscription().id, 'exampleRoleAssignment') } { "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Authorization/roleAssignments", "apiVersion": "2022-04-01", "name": "[guid(subscription().id, 'exampleRoleAssignment')]" } ] } Compliant Solution targetScope = 'resourceGroup' resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = { name: guid(resourceGroup().id, 'exampleRoleAssignment') } { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Authorization/roleAssignments", "apiVersion": "2022-04-01", "name": "[guid(resourceGroup().id, 'exampleRoleAssignment')]" } ] } See
|
||||||||||||
azureresourcemanager:S6413 |
Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident. Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions. Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIncrease the log retention period to an amount of time sufficient enough to be able to investigate and restore service in case of an incident. Sensitive Code Example{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Network/firewallPolicies", "apiVersion": "2022-07-01", "properties": { "insights": { "isEnabled": true, "retentionDays": 7 } } } ] } resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = { properties: { insights: { isEnabled: true retentionDays: 7 // Sensitive } } } For Microsoft Network Network Watchers Flow Logs: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "networkWatchers/example", "type": "Microsoft.Network/networkWatchers/flowLogs", "apiVersion": "2022-07-01", "properties": { "retentionPolicy": { "days": 7, "enabled": true } } } ] } resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = { properties: { retentionPolicy: { days: 7 enabled: true } } } For Microsoft SQL Servers Auditing Settings: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example/default", "type": "Microsoft.Sql/servers/auditingSettings", "apiVersion": "2021-11-01", "properties": { "retentionDays": 7, "state": "Enabled" } } ] } resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = { properties: { retentionDays: 7 // Sensitive } } This rule also applies to log retention periods that are too short, on the following resources:
Compliant Solution{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.Network/firewallPolicies", "apiVersion": "2022-07-01", "properties": { "insights": { "isEnabled": true, "retentionDays": 30 } } } ] } resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = { properties: { insights: { isEnabled: true retentionDays: 30 } } } For Microsoft Network Network Watchers Flow Logs: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "networkWatchers/example", "type": "Microsoft.Network/networkWatchers/flowLogs", "apiVersion": "2022-07-01", "properties": { "retentionPolicy": { "days": 30, "enabled": true } } } ] } resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = { properties: { retentionPolicy: { days: 30 enabled: true } } } For Microsoft SQL Servers Auditing Settings: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example/default", "type": "Microsoft.Sql/servers/auditingSettings", "apiVersion": "2021-11-01", "properties": { "retentionDays": 30, "state": "Enabled" } } ] } resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = { properties: { retentionDays: 30 } } Above code also applies to other types defined in previous paragraph. |
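The retention thresholds above can also be checked mechanically before deployment. The following is an illustrative sketch only (the helper name, the 30-day minimum, and the sample template are assumptions, not part of the rule): a small Python function walks a parsed ARM template and reports any `retentionDays`/`days` value below a chosen minimum.

```python
# Hypothetical helper: recursively scan a parsed ARM template (a plain dict)
# and report log retention settings shorter than a minimum number of days.
MIN_RETENTION_DAYS = 30
RETENTION_KEYS = {"retentionDays", "days"}  # property names used by the resources above

def short_retentions(node, path=""):
    """Yield (json_path, value) pairs for retention settings below the minimum."""
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{path}.{key}" if path else key
            if key in RETENTION_KEYS and isinstance(value, int) and value < MIN_RETENTION_DAYS:
                yield child, value
            else:
                yield from short_retentions(value, child)
    elif isinstance(node, list):
        for index, item in enumerate(node):
            yield from short_retentions(item, f"{path}[{index}]")

template = {
    "resources": [
        {
            "type": "Microsoft.Network/firewallPolicies",
            "properties": {"insights": {"isEnabled": True, "retentionDays": 7}},
        },
        {
            "type": "Microsoft.Sql/servers/auditingSettings",
            "properties": {"retentionDays": 30, "state": "Enabled"},
        },
    ]
}

# Only the 7-day firewall policy setting is flagged; the 30-day setting passes.
print(list(short_retentions(template)))
```

Such a check is a rough pre-commit filter, not a replacement for the analyzer: it only looks at literal integer values and will not resolve template parameters or expressions.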
||||||||||||
azureresourcemanager:S6382 |
Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data. Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication. Choosing certificate-based authentication helps bring client/host trust by allowing the host to verify the client and vice versa. It cannot be forged or forwarded by a man-in-the-middle eavesdropper, and the certificate’s private key is never sent over the network so it’s harder to steal than a password. In case of a security incident, certificates help bring investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked which causes all the certificates it issued to become untrusted. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable certificate-based authentication. Sensitive Code ExampleWhere the use of client certificates is controlled by a boolean value, such as:
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.SignalRService/webPubSub", "apiVersion": "2020-07-01-preview", "name": "example", "properties": { "tls": { "clientCertEnabled": false } } } ] } resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = { name: 'example' properties: { tls: { clientCertEnabled: false // Sensitive } } } { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2015-08-01", "name": "example", "properties": { "clientCertEnabled": false } } ] } resource example 'Microsoft.Web/sites@2015-08-01' = { name: 'example' properties: { clientCertEnabled: false // Sensitive } } Where the use of client certificates can be made optional, such as:
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2015-08-01", "name": "example", "properties": { "clientCertEnabled": true, "clientCertMode": "Optional" } } ] } resource example 'Microsoft.Web/sites@2015-08-01' = { name: 'example' properties: { clientCertEnabled: true clientCertMode: 'Optional' // Sensitive } } { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.App/containerApps", "apiVersion": "2022-10-01", "name": "example", "properties": { "configuration": { "ingress": { "clientCertificateMode": "accept" } } } } ] } resource example 'Microsoft.App/containerApps@2022-10-01' = { name: 'example' properties: { configuration: { ingress: { clientCertificateMode: 'accept' // Sensitive } } } } Where client certificates can be used to authenticate outbound requests, such as:
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.DataFactory/factories/linkedservices", "apiVersion": "2018-06-01", "name": "factories/example", "properties": { "type": "Web", "typeProperties": { "authenticationType": "Basic" } } } ] } resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = { name: 'example' properties: { type: 'Web' typeProperties: { authenticationType: 'Basic' // Sensitive } } } Where a list of permitted client certificates must be provided, such as:
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.DocumentDB/cassandraClusters", "apiVersion": "2021-10-15", "name": "example", "properties": { "clientCertificates": [] } } ] } resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = { name: 'example' properties: { clientCertificates: [] // Sensitive } } Where a resource can use both certificate-based and password-based authentication, such as:
{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.ContainerRegistry/registries/tokens", "apiVersion": "2022-12-01", "name": "registries/example", "properties": { "credentials": { "passwords": [ { "name": "password1" } ] } } } ] } resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = { name: 'example' properties: { credentials: { passwords: [ // Sensitive { name: 'password1' } ] } } } Compliant SolutionWhere the use of client certificates is controlled by a boolean value: { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.SignalRService/webPubSub", "apiVersion": "2020-07-01-preview", "name": "example", "properties": { "tls": { "clientCertEnabled": true } } } ] } resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = { name: 'example' properties: { tls: { clientCertEnabled: true } } } { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2015-08-01", "name": "example", "properties": { "clientCertEnabled": true, "clientCertMode": "Required" } } ] } resource example 'Microsoft.Web/sites@2015-08-01' = { name: 'example' properties: { clientCertEnabled: true clientCertMode: 'Required' } } Where the use of client certificates can be made optional: { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.Web/sites", "apiVersion": "2015-08-01", "name": "example", "properties": { "clientCertEnabled": true, "clientCertMode": "Required" } } ] } resource example 'Microsoft.Web/sites@2015-08-01' = { name: 'example' properties: { clientCertEnabled: true clientCertMode: 'Required' } } { 
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.App/containerApps", "apiVersion": "2022-10-01", "name": "example", "properties": { "configuration": { "ingress": { "clientCertificateMode": "require" } } } } ] } resource example 'Microsoft.App/containerApps@2022-10-01' = { name: 'example' properties: { configuration: { ingress: { clientCertificateMode: 'require' } } } } Where client certificates can be used to authenticate outbound requests: { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.DataFactory/factories/linkedservices", "apiVersion": "2018-06-01", "name": "example", "properties": { "type": "Web", "typeProperties": { "authenticationType": "ClientCertificate" } } } ] } resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = { name: 'example' properties: { type: 'Web' typeProperties: { authenticationType: 'ClientCertificate' } } } Where a list of permitted client certificates must be provided: { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.DocumentDB/cassandraClusters", "apiVersion": "2021-10-15", "name": "example", "properties": { "clientCertificates": [ { "pem": "[base64-encoded certificate]" } ] } } ] } resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = { name: 'example' properties: { clientCertificates: [ { pem: '[base64-encoded certificate]' } ] } } Where a resource can use both certificate-based and password-based authentication: { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "type": "Microsoft.ContainerRegistry/registries/tokens", "apiVersion": "2022-12-01", "name": "example", "properties": 
{ "credentials": { "certificates": [ { "name": "certificate1", "encodedPemCertificate": "[base64-encoded certificate]" } ] } } } ] } resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = { name: 'example' properties: { credentials: { certificates: [ { name: 'certificate1' encodedPemCertificate: '[base64-encoded certificate]' } ] } } } See |
||||||||||||
azureresourcemanager:S6383 |
Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised. To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable. Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down accesses. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleFor AKS Azure Kubernetes Service: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.ContainerService/managedClusters", "apiVersion": "2023-03-01", "properties": { "aadProfile": { "enableAzureRBAC": false }, "enableRBAC": false } } ] } resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = { properties: { aadProfile: { enableAzureRBAC: false // Sensitive } enableRBAC: false // Sensitive } } For Key Vault: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.KeyVault/vaults", "apiVersion": "2022-07-01", "properties": { "enableRbacAuthorization": false } } ] } resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = { properties: { enableRbacAuthorization: false // Sensitive } } Compliant SolutionFor AKS Azure Kubernetes Service: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.ContainerService/managedClusters", "apiVersion": "2023-03-01", "properties": { "aadProfile": { "enableAzureRBAC": true }, "enableRBAC": true } } ] } resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = { properties: { aadProfile: { enableAzureRBAC: true // Compliant } enableRBAC: true // Compliant } } For Key Vault: { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "resources": [ { "name": "example", "type": "Microsoft.KeyVault/vaults", "apiVersion": "2022-07-01", "properties": { "enableRbacAuthorization": true } } ] } resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = { properties: { enableRbacAuthorization: true // Compliant } } See |
||||||||||||
terraform:S5332 |
Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the http protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code ExampleFor AWS Kinesis Data Streams server-side encryption: resource "aws_kinesis_stream" "sensitive_stream" { encryption_type = "NONE" # Sensitive } For Amazon ElastiCache: resource "aws_elasticache_replication_group" "example" { replication_group_id = "example" replication_group_description = "example" transit_encryption_enabled = false # Sensitive } For Amazon ECS: resource "aws_ecs_task_definition" "ecs_task" { family = "service" container_definitions = file("task-definition.json") volume { name = "storage" efs_volume_configuration { file_system_id = aws_efs_file_system.fs.id transit_encryption = "DISABLED" # Sensitive } } } For Amazon OpenSearch domains: resource "aws_elasticsearch_domain" "example" { domain_name = "example" domain_endpoint_options { enforce_https = false # Sensitive } node_to_node_encryption { enabled = false # Sensitive } } For Amazon MSK communications between clients and brokers: resource "aws_msk_cluster" "sensitive_data_cluster" { encryption_info { encryption_in_transit { client_broker = "TLS_PLAINTEXT" # Sensitive in_cluster = false # Sensitive } } } For AWS Load Balancer Listeners: resource "aws_lb_listener" "front_load_balancer" { protocol = "HTTP" # Sensitive default_action { type = "redirect" redirect { protocol = "HTTP" } } } HTTP protocol is used for GCP Region Backend Services: resource "google_compute_region_backend_service" "example" { name = "example-service" region = "us-central1" health_checks = [google_compute_region_health_check.region.id] connection_draining_timeout_sec = 10 session_affinity = "CLIENT_IP" load_balancing_scheme = "EXTERNAL" protocol = "HTTP" # Sensitive } Compliant SolutionFor AWS Kinesis Data Streams server-side encryption: resource "aws_kinesis_stream" "compliant_stream" { encryption_type = "KMS" } For Amazon ElastiCache: 
resource "aws_elasticache_replication_group" "example" { replication_group_id = "example" replication_group_description = "example" transit_encryption_enabled = true } For Amazon ECS: resource "aws_ecs_task_definition" "ecs_task" { family = "service" container_definitions = file("task-definition.json") volume { name = "storage" efs_volume_configuration { file_system_id = aws_efs_file_system.fs.id transit_encryption = "ENABLED" } } } For Amazon OpenSearch domains: resource "aws_elasticsearch_domain" "example" { domain_name = "example" domain_endpoint_options { enforce_https = true } node_to_node_encryption { enabled = true } } For Amazon MSK communications between clients and brokers, data in transit is encrypted by default,
allowing you to omit writing the encryption_in_transit argument: resource "aws_msk_cluster" "sensitive_data_cluster" { encryption_info { encryption_in_transit { client_broker = "TLS" in_cluster = true } } } For AWS Load Balancer Listeners: resource "aws_lb_listener" "front_load_balancer" { protocol = "HTTP" default_action { type = "redirect" redirect { protocol = "HTTPS" } } } HTTPS protocol is used for GCP Region Backend Services: resource "google_compute_region_backend_service" "example" { name = "example-service" region = "us-central1" health_checks = [google_compute_region_health_check.region.id] connection_draining_timeout_sec = 10 session_affinity = "CLIENT_IP" load_balancing_scheme = "EXTERNAL" protocol = "HTTPS" } Exceptions No issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
terraform:S6302 |
A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information. Ask Yourself WhetherIdentities obtaining all the permissions:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices It’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used. Sensitive Code Example A customer-managed policy for AWS that grants all permissions by using the wildcard (*) in the Action element: resource "aws_iam_policy" "example" { name = "noncompliantpolicy" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Action = [ "*" # Sensitive ] Effect = "Allow" Resource = [ aws_s3_bucket.mybucket.arn ] } ] }) } A customer-managed policy for GCP that grants all permissions by using the owner basic role: resource "google_project_iam_binding" "example" { project = "example" role = "roles/owner" # Sensitive members = [ "user:jane@example.com", ] } Compliant Solution A customer-managed policy for AWS that grants only the required permissions: resource "aws_iam_policy" "example" { name = "compliantpolicy" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Action = [ "s3:GetObject" ] Effect = "Allow" Resource = [ aws_s3_bucket.mybucket.arn ] } ] }) } A customer-managed policy for GCP that grants restricted permissions by using the actions viewer role: resource "google_project_iam_binding" "example" { project = "example" role = "roles/actions.Viewer" members = [ "user:jane@example.com", ] } See
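The wildcard pattern described above is easy to spot programmatically once the policy document is available as JSON. The sketch below is illustrative only (the helper name and sample policy are assumptions); it flags Allow statements whose Action list contains the "*" wildcard.

```python
import json

def wildcard_statements(policy_json):
    """Return the Allow statements whose Action list contains the '*' wildcard."""
    policy = json.loads(policy_json)
    flagged = []
    for statement in policy.get("Statement", []):
        actions = statement.get("Action", [])
        if isinstance(actions, str):  # Action may be a single string or a list
            actions = [actions]
        if "*" in actions and statement.get("Effect") == "Allow":
            flagged.append(statement)
    return flagged

# A policy equivalent to the sensitive example above (bucket ARN is illustrative).
noncompliant = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Action": ["*"], "Effect": "Allow", "Resource": ["arn:aws:s3:::mybucket"]}
    ],
})

print(len(wildcard_statements(noncompliant)))  # 1: the Allow-all statement is flagged
```

Note that this only catches the literal "*" action; service-level wildcards such as "s3:*" would need an extra pattern check if you want to flag them too.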
|
||||||||||||
terraform:S6303 |
Using unencrypted RDS DB resources exposes data to unauthorized access. This situation can occur in a variety of scenarios, such as:
After a successful intrusion, the underlying applications are exposed to:
AWS-managed encryption at rest reduces this risk with a simple switch.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.

**Sensitive Code Example**

For `aws_db_instance` and `aws_rds_cluster`:

```terraform
resource "aws_db_instance" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}
```

**Compliant Solution**

For `aws_db_instance` and `aws_rds_cluster`:

```terraform
resource "aws_db_instance" "example" {
  storage_encrypted = true
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = true
}
```

**See**
|
||||||||||||
terraform:S6304 |
A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information may occur.

**Ask Yourself Whether**

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

**Recommended Secure Coding Practices**

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. Managing secure access control is then less prone to errors.

**Sensitive Code Example**

Update permission is granted for all policies using the wildcard (*) in the `Resource` element:

```terraform
resource "aws_iam_policy" "noncompliantpolicy" {
  name = "noncompliantpolicy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect = "Allow"
        Resource = [
          "*" # Sensitive
        ]
      }
    ]
  })
}
```

**Compliant Solution**

Restrict update permission to the appropriate subset of policies:

```terraform
resource "aws_iam_policy" "compliantpolicy" {
  name = "compliantpolicy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect = "Allow"
        Resource = [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/team1/*"
        ]
      }
    ]
  })
}
```

**Exceptions**
**See**
|
||||||||||||
terraform:S6388 |
Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to access unencrypted information.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It’s recommended to encrypt cloud storage that contains sensitive information.

**Sensitive Code Example**

```terraform
resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Disabled" # Sensitive
}
```

**Compliant Solution**

```terraform
resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Enabled"
  encryption_type  = "ServiceManaged"
}
```

**See** |
||||||||||||
terraform:S6265 |
Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users. The following canned ACLs are security-sensitive:
**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It’s recommended to implement the least privilege policy, i.e. to grant only the permissions users need for their required tasks. In the context of canned ACLs, set the ACL to `private`.

**Sensitive Code Example**

All users (i.e. anyone in the world, authenticated or not) have read and write permissions with the `public-read-write` canned ACL:

```terraform
resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
  acl    = "public-read-write"
}
```

**Compliant Solution**

With the `private` canned ACL, only the bucket owner has access:

```terraform
resource "aws_s3_bucket" "mycompliantbucket" { # Compliant
  bucket = "mycompliantbucketname"
  acl    = "private"
}
```

**See**
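The distinction between safe and risky canned ACLs can be sketched as a simple classification. This Python snippet is illustrative only (the function and constant names are assumptions); it groups the canned ACLs the rule flags as granting access beyond the bucket owner:

```python
# Canned ACLs that grant read and/or write access to large predefined groups
# (all users, or all authenticated AWS users) rather than the owner alone.
PUBLIC_ACLS = {"public-read", "public-read-write", "authenticated-read"}

def is_overly_permissive(acl: str) -> bool:
    """Sketch: True when a canned ACL grants access beyond the bucket owner."""
    return acl in PUBLIC_ACLS

print(is_overly_permissive("public-read-write"))  # True
print(is_overly_permissive("private"))            # False
```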
|
||||||||||||
terraform:S6308 |
Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances. To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:
Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It is recommended to encrypt Elasticsearch domains that contain sensitive information. Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

**Sensitive Code Example**

```terraform
resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = false # Sensitive, disabled by default
  }
}
```

**Compliant Solution**

```terraform
resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = true
  }
}
```

**See**
|
||||||||||||
terraform:S6380 |
Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources. Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload. Using authentication coupled with fine-grained authorizations helps bring defense-in-depth and traceability to investigators of security incidents. Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

**Ask Yourself Whether**
There is a risk if you answered yes to any of these questions.

**Recommended Secure Coding Practices**

Enable authentication in this Azure resource, and disable anonymous access. If only Basic Authentication is available, enable it.

**Sensitive Code Example**

For App Services and equivalent:

```terraform
resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = false # Sensitive
  }

  auth_settings {
    enabled                       = true
    unauthenticated_client_action = "AllowAnonymous" # Sensitive
  }
}
```

For API Management:

```terraform
resource "azurerm_api_management_api" "example" { # Sensitive, the openid_authentication block is missing
  name = "example-api"
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = false # Sensitive
  }
}
```

For Data Factory Linked Services:

```terraform
resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Anonymous" # Sensitive
}
```

For Storage Accounts:

```terraform
resource "azurerm_storage_account" "example" {
  allow_blob_public_access = true # Sensitive
}

resource "azurerm_storage_container" "example" {
  container_access_type = "blob" # Sensitive
}
```

For Redis Caches:

```terraform
resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = false # Sensitive
  }
}
```

**Compliant Solution**

For App Services and equivalent:

```terraform
resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled                       = true
    unauthenticated_client_action = "RedirectToLoginPage"
  }
}
```

For API Management:

```terraform
resource "azurerm_api_management_api" "example" {
  name = "example-api"

  openid_authentication {
    openid_provider_name = azurerm_api_management_openid_connect_provider.example.name
  }
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = true
  }
}
```

For Data Factory Linked Services:

```terraform
resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Basic"
  username            = local.creds.username
  password            = local.creds.password
}

resource "azurerm_data_factory_linked_service_odata" "example" {
  basic_authentication {
    username = local.creds.username
    password = local.creds.password
  }
}
```

For Storage Accounts:

```terraform
resource "azurerm_storage_account" "example" {
  allow_blob_public_access = false
}

resource "azurerm_storage_container" "example" {
  container_access_type = "private"
}
```

For Redis Caches:

```terraform
resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = true
  }
}
```

**See** |
||||||||||||
terraform:S6381 |
Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users. An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner. This rule raises an issue when one of the following roles is assigned:
**Ask Yourself Whether**
There is a risk if you answered yes to any of these questions.

**Recommended Secure Coding Practices**
**Sensitive Code Example**

```terraform
resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Owner" # Sensitive
  principal_id         = data.azuread_user.example.id
}
```

**Compliant Solution**

```terraform
resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Azure Maps Data Reader"
  principal_id         = data.azuread_user.example.id
}
```

**See**
|
||||||||||||
terraform:S6382 |
Disabling certificate-based authentication can reduce an organization’s ability to react to attacks on its critical functions and data. Azure offers various authentication options to access resources: anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication. Choosing certificate-based authentication helps establish client/host trust by allowing the host to verify the client and vice versa. It cannot be forged or forwarded by a man-in-the-middle eavesdropper, and the certificate’s private key is never sent over the network, so it is harder to steal than a password. In case of a security incident, certificates give investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked, which causes all the certificates it issued to become untrusted.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

Enable certificate-based authentication.

**Sensitive Code Example**

For App Service:

```terraform
resource "azurerm_app_service" "example" {
  client_cert_enabled = false # Sensitive
}
```

For Logic App Standards and Function Apps:

```terraform
resource "azurerm_function_app" "example" {
  client_cert_mode = "Optional" # Sensitive
}
```

For Data Factory Linked Services:

```terraform
resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "Basic" # Sensitive
}
```

For API Management:

```terraform
resource "azurerm_api_management" "example" {
  sku_name                = "Consumption_1"
  client_certificate_mode = "Optional" # Sensitive
}
```

For Linux and Windows Web Apps:

```terraform
resource "azurerm_linux_web_app" "example" {
  client_cert_enabled = false # Sensitive
}

resource "azurerm_linux_web_app" "example2" {
  client_cert_enabled = true
  client_cert_mode    = "Optional" # Sensitive
}
```

**Compliant Solution**

For App Service:

```terraform
resource "azurerm_app_service" "example" {
  client_cert_enabled = true
}
```

For Logic App Standards and Function Apps:

```terraform
resource "azurerm_function_app" "example" {
  client_cert_mode = "Required"
}
```

For Data Factory Linked Services:

```terraform
resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "ClientCertificate"
}
```

For API Management:

```terraform
resource "azurerm_api_management" "example" {
  sku_name                = "Consumption_1"
  client_certificate_mode = "Required"
}
```

For Linux and Windows Web Apps:

```terraform
resource "azurerm_linux_web_app" "example" {
  client_cert_enabled = true
  client_cert_mode    = "Required"
}
```

**See** |
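The "Required" versus "Optional" distinction above mirrors how TLS client-certificate verification works in general. A minimal Python sketch (illustrative, using only the standard `ssl` module) of a server-side context that refuses clients presenting no certificate, analogous to `client_cert_mode = "Required"`:

```python
import ssl

# Build a TLS server context that requires (not merely accepts) client
# certificates. In a real deployment you would also load the server's own
# key pair and a CA bundle with load_cert_chain()/load_verify_locations().
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

With `ssl.CERT_OPTIONAL` the handshake succeeds without a certificate, which corresponds to the sensitive "Optional" settings flagged by this rule.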
||||||||||||
terraform:S6383 |
Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised. To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable. Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down access.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**
**Sensitive Code Example**

For Azure Kubernetes Services:

```terraform
resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = false # Sensitive
  }
}

resource "azurerm_kubernetes_cluster" "example2" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed            = true
      azure_rbac_enabled = false # Sensitive
    }
  }
}
```

For Key Vaults:

```terraform
resource "azurerm_key_vault" "example" {
  enable_rbac_authorization = false # Sensitive
}
```

**Compliant Solution**

For Azure Kubernetes Services:

```terraform
resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = true
  }
}

resource "azurerm_kubernetes_cluster" "example2" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed            = true
      azure_rbac_enabled = true
    }
  }
}
```

For Key Vaults:

```terraform
resource "azurerm_key_vault" "example" {
  enable_rbac_authorization = true
}
```

**See** |
||||||||||||
terraform:S6385 |
Defining a custom role with the same level of permissions as the built-in `Owner` role is security-sensitive.

**Why is this an issue?**

In Azure, the `Owner` role grants full access to manage all resources in its scope. Because it is a powerful entitlement, it should be granted to as few users as possible. When a custom role has the same level of permissions as the `Owner` role, the same caution should apply to it.

**What is the potential impact?**

Custom roles that provide the same level of permissions as `Owner` are equally powerful. If the affected role is unexpectedly assigned to users, they can compromise the affected scope. They can do so in the long term by assigning dangerous roles to other users or entities. Depending on the scope to which the role is assignable, the exact impact of a successful exploitation may vary. It generally ranges from data compromise to the takeover of the cloud infrastructure.

**Infrastructure takeover**

By obtaining the right role, an attacker can gain control over part or all of the Azure infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining. This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions. Furthermore, corporate Azure infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure.

**Compromise of sensitive data**

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers with the right role could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.
In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

**Financial loss**

Financial losses can occur when a malicious user is able to use a paid third-party service. Each user assigned the overly broad role can use the third-party service without limit for their own needs, including in ways that were not expected. This additional use leads to added costs with the Azure service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected environment. This might result in a partial denial of service for all legitimate users.

**How to fix it**

To reduce the risk of intrusion through a compromised owner, it is recommended to limit the number of subscription owners.

**Code examples**

Noncompliant code example:

```terraform
resource "azurerm_role_definition" "example" { # Sensitive
  name  = "example"
  scope = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}
```

Compliant solution:

```terraform
resource "azurerm_role_definition" "example" {
  name  = "example"
  scope = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["Microsoft.Compute/*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}
```

**Going the extra mile**

Here is a list of recommendations that can be followed regarding good usage of roles:
**Resources**

Documentation
Standards |
||||||||||||
terraform:S6387 |
Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope". The widest scopes a role can be assigned to are:
In case of a security incident involving a compromised identity (user, group, or service principal), limiting its role assignments to the narrowest scope possible helps separate duties and limits what resources are at risk.

**Ask Yourself Whether**
There is a risk if you answered yes to any of these questions.

**Recommended Secure Coding Practices**
**Sensitive Code Example**

```terraform
resource "azurerm_role_assignment" "example" {
  scope                = data.azurerm_subscription.primary.id # Sensitive
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}
```

**Compliant Solution**

```terraform
resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}
```

**See**
|
||||||||||||
terraform:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

**Why is this an issue?**

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

**What is the potential impact?**

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

**Additional attack surface**

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

**Breach of confidentiality and privacy**

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization: customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

**Legal and compliance issues**

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

**How to fix it in AWS API Gateway**

**Code examples**

These code samples illustrate how to fix this issue in both APIGateway and ApiGatewayV2.

Noncompliant code example:

```terraform
resource "aws_api_gateway_domain_name" "example" {
  domain_name     = "api.example.com"
  security_policy = "TLS_1_0" # Noncompliant
}
```

ApiGatewayV2 uses a weak TLS version by default:

```terraform
resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"

  domain_name_configuration {} # Noncompliant
}
```

Compliant solution:

```terraform
resource "aws_api_gateway_domain_name" "example" {
  domain_name     = "api.example.com"
  security_policy = "TLS_1_2"
}

resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"

  domain_name_configuration {
    security_policy = "TLS_1_2"
  }
}
```

**How does this work?**

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following.
**Use TLS v1.2 or TLS v1.3**

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between systems that do not yet support TLS v1.3. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

**Resources**

Articles & blog posts
Standards |
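The same TLS floor enforced by `security_policy = "TLS_1_2"` can be applied in application code. A minimal Python sketch (illustrative, standard `ssl` module only) that forbids protocol versions below TLS 1.2 on outgoing connections:

```python
import ssl

# Create a client-side context with certificate validation enabled, then
# raise the protocol floor so TLS 1.0/1.1 handshakes are rejected.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Recent Python versions already default to this floor, but setting it explicitly documents the intent and protects against older runtimes.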
||||||||||||
terraform:S6270 |
Resource-based policies granting access to all users can lead to information leakage.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to the users who require them for their tasks. In the context of resource-based policies, list the principals that need the access and grant them only the required privileges.

**Sensitive Code Example**

This policy allows all users, including anonymous ones, to access an S3 bucket:

```terraform
resource "aws_s3_bucket_policy" "mynoncompliantpolicy" { # Sensitive
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id      = "mynoncompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        AWS = "*"
      }
      Action = [
        "s3:PutObject"
      ]
      Resource = "${aws_s3_bucket.mybucket.arn}/*"
    }]
  })
}
```

**Compliant Solution**

This policy allows only the authorized users:

```terraform
resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id      = "mycompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        AWS = [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        ]
      }
      Action = [
        "s3:PutObject"
      ]
      Resource = "${aws_s3_bucket.mybucket.arn}/*"
    }]
  })
}
```

**See**
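Detecting the anonymous-principal pattern can also be done programmatically. Below is a hedged Python sketch (the function name is illustrative) that flags `Allow` statements whose principal is the `*` wildcard, i.e. any AWS user including anonymous ones:

```python
import json

def allows_anonymous_access(policy_json: str) -> bool:
    """Sketch: True when an Allow statement's principal is the '*' wildcard."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        if principal == "*":  # "Principal": "*" shorthand
            return True
        aws = principal.get("AWS", []) if isinstance(principal, dict) else []
        if isinstance(aws, str):
            aws = [aws]
        if "*" in aws:
            return True
    return False

policy = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Principal": {"AWS": "*"},
     "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::mybucket/*"}
  ]
}"""
print(allows_anonymous_access(policy))  # True
```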
|
||||||||||||
terraform:S6275 |
Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data at rest and data in transit between an instance and its attached EBS storage. If adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration; a volume created from an encrypted snapshot will also be encrypted by default.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

**Sensitive Code Example**

For `aws_ebs_volume`:

```terraform
resource "aws_ebs_volume" "ebs_volume" {
  # Sensitive as encryption is disabled by default
}

resource "aws_ebs_volume" "ebs_volume" {
  encrypted = false # Sensitive
}
```

For `aws_ebs_encryption_by_default`:

```terraform
resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = false # Sensitive
}

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    # Sensitive as encryption is disabled by default
  }

  ebs_block_device {
    # Sensitive as encryption is disabled by default
  }
}

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = false # Sensitive
  }

  ebs_block_device {
    encrypted = false # Sensitive
  }
}
```

**Compliant Solution**

For `aws_ebs_volume`:

```terraform
resource "aws_ebs_volume" "ebs_volume" {
  encrypted = true
}
```

For `aws_ebs_encryption_by_default`:

```terraform
resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = true # Optional, default is "true"
}

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = true
  }

  ebs_block_device {
    encrypted = true
  }
}
```

**See** |
||||||||||||
terraform:S6317 |
Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

**Why is this an issue?**

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability for an identity to perform a predefined set of actions on a list of resources. For such policies, it is easy to define very broad permissions, for example by using the wildcard `*`. Permissions should instead be narrowed to the minimum required; if this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

**What is the potential impact?**

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

**Privilege escalation**

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities. For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets.
If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

**How to fix it in AWS Identity and Access Management**

**Code examples**

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example:

```terraform
resource "aws_iam_policy" "example" {
  name   = "example"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:UpdateFunctionCode"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
```

Compliant solution. The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed:

```terraform
resource "aws_iam_policy" "example" {
  name   = "example"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:UpdateFunctionCode"
      ],
      "Resource": "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
    }
  ]
}
EOF
}
```

**How does this work?**

**Principle of least privilege**

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else. To implement this successfully, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions that is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.

**Resources**

Documentation
Articles & blog posts
Standards |
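The narrowing step above, from `"Resource": "*"` to an explicit ARN, can be validated mechanically. This Python sketch (illustrative only; the allowlist pattern and function name are assumptions for the example) checks that every `Resource` entry in a policy matches an approved ARN pattern:

```python
import json
from fnmatch import fnmatchcase

# Illustrative allowlist: only the team-owned Lambda function from the example.
ALLOWED_RESOURCE_PATTERNS = [
    "arn:aws:lambda:us-east-2:123456789012:function:my-function*",
]

def resources_outside_allowlist(policy_json: str):
    """Sketch: list Resource entries that match no approved ARN pattern."""
    policy = json.loads(policy_json)
    offending = []
    for stmt in policy.get("Statement", []):
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        for arn in resources:
            if not any(fnmatchcase(arn, p) for p in ALLOWED_RESOURCE_PATTERNS):
                offending.append(arn)
    return offending

compliant = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["lambda:UpdateFunctionCode"],
     "Resource": "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"}
  ]
}"""
print(resources_outside_allowlist(compliant))  # []
```

A bare `"Resource": "*"` matches none of the patterns and would be reported, which is exactly the condition the noncompliant example exhibits.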
||||||||||||
terraform:S6319 |
Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. If adversaries physically access the storage media, they cannot decrypt encrypted data.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

**Sensitive Code Example**

For `aws_sagemaker_notebook_instance`:

```terraform
resource "aws_sagemaker_notebook_instance" "notebook" {
  # Sensitive, encryption disabled by default
}
```

**Compliant Solution**

For `aws_sagemaker_notebook_instance`:

```terraform
resource "aws_sagemaker_notebook_instance" "notebook" {
  kms_key_id = aws_kms_key.enc_key.key_id
}
```

**See** |
||||||||||||
terraform:S6327 |
Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS allows messages to be encrypted as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

**Ask Yourself Whether**
There is a risk if you answered yes to any of those questions.

**Recommended Secure Coding Practices**

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

**Sensitive Code Example**

For `aws_sns_topic`:

```terraform
resource "aws_sns_topic" "topic" { # Sensitive, encryption disabled by default
  name = "sns-unencrypted"
}
```

**Compliant Solution**

For `aws_sns_topic`:

```terraform
resource "aws_sns_topic" "topic" {
  name              = "sns-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}
```

**See** |
||||||||||||
terraform:S6403 |
By default, GCP SQL instances offer encryption in transit, with support for TLS, but insecure connections are still accepted. On an unsecured network, such as a public network, the risk of traffic being intercepted is high. When the data isn’t encrypted, an attacker can intercept it and read confidential information. When creating a GCP SQL instance, a public IP address is automatically assigned to it and connections to the SQL instance from public networks can be authorized. TLS is automatically used when connecting to SQL instances through:
Ask Yourself WhetherConnections are not already automatically encrypted by GCP (e.g. SQL Auth proxy) and
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt all connections to the SQL instance, whether using public or private IP addresses. However, since private networks can be considered trusted, requiring TLS in this situation is usually a lower priority task. Sensitive Code Exampleresource "google_sql_database_instance" "example" { # Sensitive: tls is not required name = "noncompliant-master-instance" database_version = "POSTGRES_11" region = "us-central1" settings { tier = "db-f1-micro" } } Compliant Solutionresource "google_sql_database_instance" "example" { name = "compliant-master-instance" database_version = "POSTGRES_11" region = "us-central1" settings { tier = "db-f1-micro" ip_configuration { require_ssl = true ipv4_enabled = true } } } See
|
||||||||||||
terraform:S6404 |
Granting public access to GCP resources may reduce an organization’s ability to protect itself against attacks or theft of its GCP resources. To be as prepared as possible in the event of a security incident, authentication combined with fine-grained permissions helps maintain the principle of defense in depth and trace incidents back to the perpetrators. GCP also provides the ability to grant access to a large group of people:
The only thing that changes in these cases is the ability to track user access in the event of an incident. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesExplicitly set access to this resource or function as private. Sensitive Code ExampleFor IAM resources: resource "google_cloudfunctions_function_iam_binding" "example" { members = [ "allUsers", # Sensitive "allAuthenticatedUsers", # Sensitive ] } resource "google_cloudfunctions_function_iam_member" "example" { member = "allAuthenticatedUsers" # Sensitive } For ACL resources: resource "google_storage_bucket_access_control" "example" { entity = "allUsers" # Sensitive } resource "google_storage_bucket_acl" "example" { role_entity = [ "READER:allUsers", # Sensitive "READER:allAuthenticatedUsers", # Sensitive ] } For container clusters: resource "google_container_cluster" "example" { private_cluster_config { enable_private_nodes = false # Sensitive enable_private_endpoint = false # Sensitive } } Compliant SolutionFor IAM resources: resource "google_cloudfunctions_function_iam_binding" "example" { members = [ "serviceAccount:${google_service_account.example.email}", "group:${var.example_group}" ] } resource "google_cloudfunctions_function_iam_member" "example" { member = "user:${var.example_user}" } For ACL resources: resource "google_storage_bucket_access_control" "example" { entity = "user-${var.example_user}" } resource "google_storage_bucket_acl" "example" { role_entity = [ "READER:user-name@example.com", "READER:group-admins@example.com" ] } For container clusters: resource "google_container_cluster" "example" { private_cluster_config { enable_private_nodes = true enable_private_endpoint = true } } See |
||||||||||||
terraform:S6245 |
This rule is deprecated, and will eventually be removed. Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposal of disks, and other attacks on the AWS infrastructure itself. There are three SSE options:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys. Sensitive Code ExampleServer-side encryption is not used: resource "aws_s3_bucket" "example" { # Sensitive bucket = "example" } Compliant SolutionServer-side encryption with Amazon S3-managed keys is used for AWS provider version 3 or below: resource "aws_s3_bucket" "example" { bucket = "example" server_side_encryption_configuration { rule { apply_server_side_encryption_by_default { sse_algorithm = "AES256" } } } } Server-side encryption with Amazon S3-managed keys is used for AWS provider version 4 or above: resource "aws_s3_bucket" "example" { bucket = "example" } resource "aws_s3_bucket_server_side_encryption_configuration" "example" { bucket = aws_s3_bucket.example.bucket rule { apply_server_side_encryption_by_default { sse_algorithm = "AES256" } } } See
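The compliant examples above use Amazon S3-managed keys (SSE-S3). When more control over key management is required, the same provider version 4 resource can point at a customer-managed KMS key instead; a sketch of this variant (the `aws_kms_key.enc_key` reference is an illustrative assumption):

```terraform
# SSE-KMS variant: encrypt objects with a customer-managed KMS key
# instead of Amazon S3-managed keys (AWS provider version 4 or above).
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.bucket
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.enc_key.arn # illustrative key reference
    }
  }
}
```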
|
||||||||||||
terraform:S6249 |
By default, S3 buckets can be accessed through the HTTP and HTTPS protocols. As HTTP is a clear-text protocol, it lacks encryption of transported data, as well as the capability to build an authenticated connection. This means that a malicious actor who is able to intercept traffic from the network can read, modify, or corrupt the transported content. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to deny all HTTP requests:
Sensitive Code ExampleNo secure policy is attached to this bucket: resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive bucket = "mynoncompliantbucketname" } A policy is defined but forces HTTPS communication only for some users: resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive bucket = "mynoncompliantbucketname" } resource "aws_s3_bucket_policy" "mynoncompliantbucketpolicy" { bucket = "mynoncompliantbucketname" policy = jsonencode({ Version = "2012-10-17" Id = "mynoncompliantbucketpolicy" Statement = [ { Sid = "HTTPSOnly" Effect = "Deny" Principal = [ "arn:aws:iam::123456789123:root" ] # secondary location: only one principal is forced to use https Action = "s3:*" Resource = [ aws_s3_bucket.mynoncompliantbucket.arn, "${aws_s3_bucket.mynoncompliantbucket.arn}/*", ] Condition = { Bool = { "aws:SecureTransport" = "false" } } }, ] }) } Compliant SolutionA secure policy that denies all HTTP requests is used: resource "aws_s3_bucket" "mycompliantbucket" { bucket = "mycompliantbucketname" } resource "aws_s3_bucket_policy" "mycompliantpolicy" { bucket = "mycompliantbucketname" policy = jsonencode({ Version = "2012-10-17" Id = "mycompliantpolicy" Statement = [ { Sid = "HTTPSOnly" Effect = "Deny" Principal = "*" Action = "s3:*" Resource = [ aws_s3_bucket.mycompliantbucket.arn, "${aws_s3_bucket.mycompliantbucket.arn}/*", ] Condition = { Bool = { "aws:SecureTransport" = "false" } } }, ] }) } See
|
||||||||||||
terraform:S6329 |
Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption. Depending on the component, inbound access from the Internet can be enabled via:
Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident. This decision increases the likelihood of attacks on the organization, such as:
Ask Yourself WhetherThis cloud resource:
There is a risk if you answered no to any of those questions. Recommended Secure Coding PracticesAvoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites. Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components. The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address. Sensitive Code ExampleFor AWS: resource "aws_instance" "example" { associate_public_ip_address = true # Sensitive } resource "aws_dms_replication_instance" "example" { publicly_accessible = true # Sensitive } For Azure: resource "azurerm_postgresql_server" "example" { public_network_access_enabled = true # Sensitive } resource "azurerm_kubernetes_cluster" "production" { api_server_authorized_ip_ranges = ["176.0.0.0/4"] # Sensitive default_node_pool { enable_node_public_ip = true # Sensitive } } For GCP: resource "google_compute_instance" "example" { network_interface { network = "default" access_config { # Sensitive # Ephemeral public IP } } } Compliant SolutionFor AWS: resource "aws_instance" "example" { associate_public_ip_address = false } resource "aws_dms_replication_instance" "example" { publicly_accessible = false } For Azure: resource "azurerm_postgresql_server" "example" { public_network_access_enabled = false } resource "azurerm_kubernetes_cluster" "production" { api_server_authorized_ip_ranges = ["192.168.0.0/16"] default_node_pool { enable_node_public_ip = false } } For GCP: resource "google_compute_instance" "example" { network_interface { network = google_compute_network.vpc_network_example.name } } Note
that setting See
|
||||||||||||
terraform:S6400 |
Granting highly privileged resource rights to users or groups can reduce an organization’s ability to protect against account or service theft. It prevents proper segregation of duties and creates potentially critical attack vectors on affected resources. If elevated access rights are abused or compromised, both the data that the affected resources work with and their access tracking are at risk. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesGrant IAM policies or members a less permissive role: In most cases, granting them read-only privileges is sufficient. Separate tasks by creating multiple roles that do not use a full access role for day-to-day work. If the predefined GCP roles do not include the specific permissions you need, create custom IAM roles. Sensitive Code ExampleFor an IAM policy setup: data "google_iam_policy" "admin" { binding { role = "roles/run.admin" # Sensitive members = [ "user:name@example.com", ] } } resource "google_cloud_run_service_iam_policy" "policy" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name policy_data = data.google_iam_policy.admin.policy_data } For an IAM policy binding: resource "google_cloud_run_service_iam_binding" "example" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name role = "roles/run.admin" # Sensitive members = [ "user:name@example.com", ] } For adding a member to a policy: resource "google_cloud_run_service_iam_member" "example" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name role = "roles/run.admin" # Sensitive member = "user:name@example.com" } Compliant SolutionFor an IAM policy setup: data "google_iam_policy" "admin" { binding { role = "roles/viewer" members = [ "user:name@example.com", ] } } resource "google_cloud_run_service_iam_policy" "example" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name policy_data = data.google_iam_policy.admin.policy_data } For an IAM policy binding: resource "google_cloud_run_service_iam_binding" "example" { 
location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name role = "roles/viewer" members = [ "user:name@example.com", ] } For adding a member to a policy: resource "google_cloud_run_service_iam_member" "example" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name role = "roles/viewer" member = "user:name@example.com" } See |
||||||||||||
terraform:S6405 |
SSH keys stored and managed in a project’s metadata can be used to access GCP VM instances. By default, GCP automatically deploys project-level SSH keys to VM instances. Project-level SSH keys can lead to unauthorized access because:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleresource "google_compute_instance" "example" { # Sensitive, because metadata.block-project-ssh-keys is not set to true name = "example" machine_type = "e2-micro" zone = "us-central1-a" network_interface { network = "default" access_config { } } } Compliant Solutionresource "google_compute_instance" "example" { name = "example" machine_type = "e2-micro" zone = "us-central1-a" metadata = { block-project-ssh-keys = true } network_interface { network = "default" access_config { } } } See
|
||||||||||||
terraform:S6406 |
Excessive granting of GCP IAM permissions can allow attackers to exploit an organization’s cloud resources with malicious intent. To prevent improper creation or deletion of resources after an account is compromised, proactive measures include both following GCP Security Insights and ensuring custom roles contain as few privileges as possible. After gaining a foothold in the target infrastructure, sophisticated attacks typically consist of two major parts.
Once the malicious intent is executed, attackers must avoid detection at all costs.
For operations teams to be resilient in this scenario, their organization must apply both:
This rule raises an issue when a custom role grants a number of sensitive permissions (read-write or destructive permissions) that is greater than a given parameter. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesTo reduce the risks associated with this role after a compromise:
Sensitive Code ExampleThis custom role grants more than 5 sensitive permissions: resource "google_project_iam_custom_role" "example" { permissions = [ # Sensitive "resourcemanager.projects.create", # Sensitive permission "resourcemanager.projects.delete", # Sensitive permission "resourcemanager.projects.get", "resourcemanager.projects.list", "run.services.create", # Sensitive permission "run.services.delete", # Sensitive permission "run.services.get", "run.services.getIamPolicy", "run.services.setIamPolicy", # Sensitive permission "run.services.list", "run.services.update", # Sensitive permission ] } Compliant SolutionThis custom role grants fewer than 5 sensitive permissions: resource "google_project_iam_custom_role" "example" { permissions = [ "resourcemanager.projects.get", "resourcemanager.projects.list", "run.services.create", "run.services.delete", "run.services.get", "run.services.getIamPolicy", "run.services.list", "run.services.update", ] } See
|
||||||||||||
terraform:S6281 |
By default, S3 buckets are private: only the bucket owner can access them. This access control can be relaxed with ACLs or policies. To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to configure:
Sensitive Code ExampleBy default, when not set, the resource "aws_s3_bucket" "example" { # Sensitive: no Public Access Block defined for this bucket bucket = "example" } This resource "aws_s3_bucket" "example" { # Sensitive bucket = "examplename" } resource "aws_s3_bucket_public_access_block" "example-public-access-block" { bucket = aws_s3_bucket.example.id block_public_acls = false # should be true block_public_policy = true ignore_public_acls = true restrict_public_buckets = true } Compliant SolutionThis resource "aws_s3_bucket" "example" { bucket = "example" } resource "aws_s3_bucket_public_access_block" "example-public-access-block" { bucket = aws_s3_bucket.example.id block_public_acls = true block_public_policy = true ignore_public_acls = true restrict_public_buckets = true } See
|
||||||||||||
terraform:S6321 |
Why is this an issue?Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and
outbound traffic. What is the potential impact?Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system. Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system. How to fix itIt is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers. Code examplesNoncompliant code exampleAn ingress rule allowing all inbound SSH traffic for AWS: resource "aws_security_group" "noncompliant" { name = "allow_ssh_noncompliant" description = "allow_ssh_noncompliant" vpc_id = aws_vpc.main.id ingress { description = "SSH rule" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] # Noncompliant } } A security rule allowing all inbound SSH traffic for Azure: resource "azurerm_network_security_rule" "noncompliant" { priority = 100 direction = "Inbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "22" source_address_prefix = "*" # Noncompliant destination_address_prefix = "*" } A firewall rule allowing all inbound SSH traffic for GCP: resource "google_compute_firewall" "noncompliant" { network = google_compute_network.default.name allow { protocol = "tcp" ports = ["22"] } source_ranges = ["0.0.0.0/0"] # Noncompliant } Compliant solutionAn ingress rule allowing inbound SSH traffic from specific IP addresses for AWS: resource "aws_security_group" "compliant" { name = "allow_ssh_compliant" description = "allow_ssh_compliant" vpc_id = aws_vpc.main.id ingress { description = "SSH rule" from_port = 22 to_port = 22 protocol = "tcp" cidr_blocks = ["1.2.3.0/24"] } } A security rule allowing inbound SSH traffic from specific IP addresses for 
Azure: resource "azurerm_network_security_rule" "compliant" { priority = 100 direction = "Inbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "22" source_address_prefix = "1.2.3.0" destination_address_prefix = "*" } A firewall rule allowing inbound SSH traffic from specific IP addresses for GCP: resource "google_compute_firewall" "compliant" { network = google_compute_network.default.name allow { protocol = "tcp" ports = ["22"] } source_ranges = ["10.0.0.1/32"] } ResourcesDocumentation
Standards |
||||||||||||
terraform:S6364 |
Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident. Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident. Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIncrease the backup retention period to an amount of time sufficient to restore service in case of an incident. Sensitive Code ExampleFor Amazon Relational Database Service clusters and instances: resource "aws_db_instance" "example" { backup_retention_period = 2 # Sensitive } For Azure Cosmos DB accounts: resource "azurerm_cosmosdb_account" "example" { backup { type = "Periodic" retention_in_hours = 8 # Sensitive } } Compliant SolutionFor Amazon Relational Database Service clusters and instances: resource "aws_db_instance" "example" { backup_retention_period = 5 } For Azure Cosmos DB accounts: resource "azurerm_cosmosdb_account" "example" { backup { type = "Periodic" retention_in_hours = 300 } } |
||||||||||||
terraform:S6401 |
The likelihood of security incidents increases when cryptographic keys are used for a long time. Thus, to strengthen data protection, it’s recommended to rotate the symmetric keys created with the Google Cloud Key Management Service (KMS) automatically and periodically. Note that it’s not possible in GCP KMS to rotate asymmetric keys automatically. Ask Yourself Whether
Recommended Secure Coding PracticesIt’s recommended to rotate keys automatically and regularly. The shorter the key period, the less data can be decrypted by an attacker if a key is compromised. So the key rotation period usually depends on the amount of data encrypted with a key or other requirements such as compliance with security standards. In general, a period of time of 90 days can be used. Sensitive Code Exampleresource "google_kms_crypto_key" "noncompliant-key" { # Sensitive: no rotation period is defined name = "example" key_ring = google_kms_key_ring.keyring.id } Compliant Solutionresource "google_kms_crypto_key" "compliant-key" { name = "example" key_ring = google_kms_key_ring.keyring.id rotation_period = "7776000s" # 90 days } See
|
||||||||||||
terraform:S6402 |
Domain Name Systems (DNS) are vulnerable by default to various types of attacks. One of the biggest risks is DNS cache poisoning, which occurs when a DNS accepts spoofed DNS data, caches the malicious records, and potentially sends them later in response to legitimate DNS request lookups. This attack typically relies on the attacker’s MITM ability on the network and can be used to redirect users from an intended website to a malicious website. To prevent these vulnerabilities, Domain Name System Security Extensions (DNSSEC) ensure the integrity and authenticity of DNS data by digitally signing DNS zones. The public key of a DNS zone used to validate signatures can be trusted as DNSSEC is based on the following chain of trust:
Ask Yourself WhetherThe parent DNS zone (likely managed by the DNS registrar of the domain name) supports DNSSEC and
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIt’s recommended to use DNSSEC when creating private and public DNS zones. Private DNS zones cannot be queried on the Internet and provide DNS name resolution for private networks. The risk of MITM attacks might be considered low on these networks and therefore implementing DNSSEC is still recommended but not with a high priority. Note: Choose a robust signing algorithm when setting up DNSSEC, such as Sensitive Code Exampleresource "google_dns_managed_zone" "example" { # Sensitive: dnssec_config is missing name = "foobar" dns_name = "foo.bar." } Compliant Solutionresource "google_dns_managed_zone" "example" { name = "foobar" dns_name = "foo.bar." dnssec_config { default_key_specs { algorithm = "rsasha256" } } } See
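In addition to the key specs shown in the compliant example, the `dnssec_config` block of `google_dns_managed_zone` also accepts a `state` argument that controls whether DNSSEC signing is active; a sketch that enables it explicitly (a variant of the compliant example, assuming the provider's `state` argument):

```terraform
# Variant of the compliant example that also sets dnssec_config.state,
# making it explicit that the zone is signed.
resource "google_dns_managed_zone" "example" {
  name     = "foobar"
  dns_name = "foo.bar."
  dnssec_config {
    state = "on" # enable DNSSEC signing for this zone
    default_key_specs {
      algorithm = "rsasha256"
    }
  }
}
```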
|
||||||||||||
terraform:S6407 |
App Engine supports encryption in transit through TLS. As soon as the app is deployed, it can be requested using When creating an App Engine, request handlers can be set with different security levels for encryption:
Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIt’s recommended for App Engine handlers to require TLS for all traffic. It can be achieved by setting the security level to SECURE_ALWAYS.
Sensitive Code Example
resource "google_app_engine_standard_app_version" "example" { version_id = "v1" service = "default" runtime = "nodejs" handlers { url_regex = ".*" redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301" security_level = "SECURE_OPTIONAL" # Sensitive script { script_path = "auto" } } } Compliant SolutionForce the use of TLS for the handler by setting the security level to SECURE_ALWAYS: resource "google_app_engine_standard_app_version" "example" { version_id = "v1" service = "default" runtime = "nodejs" handlers { url_regex = ".*" redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301" security_level = "SECURE_ALWAYS" script { script_path = "auto" } } } See
|
||||||||||||
terraform:S6408 |
Creating custom roles that allow privilege escalation can allow attackers to maliciously exploit an organization’s cloud resources. Certain GCP permissions allow impersonation of one or more privileged principals within a GCP infrastructure. For example, privileges like After gaining a foothold in the target infrastructure, sophisticated attackers typically map their newfound roles to understand what is exploitable. The riskiest privileges are either:
In either case, the following privileges should be avoided or granted only with caution:
Ask Yourself Whether
There is a risk if you answered no to these questions. Recommended Secure Coding PracticesUse a permission that does not allow privilege escalation. Sensitive Code ExampleLightweight custom role intended for a developer: resource "google_organization_iam_custom_role" "example" { permissions = [ "iam.serviceAccounts.getAccessToken", # Sensitive "iam.serviceAccounts.getOpenIdToken", # Sensitive "iam.serviceAccounts.actAs", # Sensitive "iam.serviceAccounts.implicitDelegation", # Sensitive "resourcemanager.projects.get", "resourcemanager.projects.list", "run.services.create", "run.services.delete", "run.services.get", "run.services.getIamPolicy", "run.services.list", "run.services.update", ] } Lightweight custom role intended for a read-only user: resource "google_project_iam_custom_role" "example" { permissions = [ "iam.serviceAccountKeys.create", # Sensitive "iam.serviceAccountKeys.get", # Sensitive "deploymentmanager.deployments.create", # Sensitive "cloudbuild.builds.create", # Sensitive "resourcemanager.projects.get", "resourcemanager.projects.list", "run.services.get", "run.services.getIamPolicy", "run.services.list", ] } Compliant SolutionLightweight custom role intended for a developer: resource "google_project_iam_custom_role" "example" { permissions = [ "resourcemanager.projects.get", "resourcemanager.projects.list", "run.services.create", "run.services.delete", "run.services.get", "run.services.getIamPolicy", "run.services.list", "run.services.update", ] } Lightweight custom role intended for a read-only user: resource "google_project_iam_custom_role" "example" { permissions = [ "resourcemanager.projects.get", "resourcemanager.projects.list", "run.services.get", "run.services.getIamPolicy", "run.services.list", ] } See
|
||||||||||||
terraform:S6409 |
Enabling Legacy Authorization (Attribute-Based Access Control, or ABAC) on Google Kubernetes Engine resources can reduce an organization’s ability to protect itself against access controls being compromised. For Kubernetes, Attribute-Based Access Control has been superseded by Role-Based Access Control. ABAC is no longer under active development and should be avoided. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesUnless you are relying on ABAC, leave it disabled. Sensitive Code Exampleresource "google_container_cluster" "example" { enable_legacy_abac = true # Sensitive } Compliant Solutionresource "google_container_cluster" "example" { enable_legacy_abac = false } See
|
||||||||||||
terraform:S6414 |
The Google Cloud audit logs service records administrative activities and accesses to Google Cloud resources of the project. It is important to enable audit logs to be able to investigate malicious activities in the event of a security incident. Some project members may be exempted from having their activities recorded in the Google Cloud audit log service, creating a blind spot and reducing the capacity to investigate future security events. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to have a consistent audit logging policy for all project members and therefore not to create logging exemptions for certain members. Sensitive Code Exampleresource "google_project_iam_audit_config" "example" { project = data.google_project.project.id service = "allServices" audit_log_config { log_type = "ADMIN_READ" exempted_members = [ # Sensitive "user:rogue.administrator@gmail.com", ] } } Compliant Solutionresource "google_project_iam_audit_config" "example" { project = data.google_project.project.id service = "allServices" audit_log_config { log_type = "ADMIN_READ" } } See
|
||||||||||||
terraform:S6252 |
S3 buckets can be in three states related to versioning:
When an S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the bucket. This can lead to unintentional or intentional information loss. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object. Sensitive Code ExampleVersioning is disabled by default: resource "aws_s3_bucket" "example" { # Sensitive bucket = "example" } Compliant SolutionVersioning is enabled for AWS provider version 4 or above: resource "aws_s3_bucket" "example" { bucket = "example" } resource "aws_s3_bucket_versioning" "example-versioning" { bucket = aws_s3_bucket.example.id versioning_configuration { status = "Enabled" } } Versioning is enabled for AWS provider version 3 or below: resource "aws_s3_bucket" "example" { bucket = "example" versioning { enabled = true } } See
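On top of enabling versioning, the provider version 4 resource also allows requiring MFA to permanently delete object versions; a sketch of this variant (the `var.mfa_serial` and `var.mfa_code` variables are hypothetical placeholders):

```terraform
# Variant of the provider >= 4 compliant example that additionally
# requires MFA for permanent deletion of object versions.
resource "aws_s3_bucket_versioning" "example-versioning" {
  bucket = aws_s3_bucket.example.id
  # "serial-number code" of the authenticated MFA device;
  # the variables below are hypothetical placeholders.
  mfa = "${var.mfa_serial} ${var.mfa_code}"
  versioning_configuration {
    status     = "Enabled"
    mfa_delete = "Enabled"
  }
}
```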
|
||||||||||||
terraform:S6258 |
Disabling logging of this component can lead to missing traceability in case of a security incident. Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions. Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable the logging capabilities of this component. Depending on the component, new permissions might be required by the logging storage
components. Sensitive Code ExampleFor Amazon S3 access requests: resource "aws_s3_bucket" "example" { # Sensitive bucket = "example" } For Amazon API Gateway stages: resource "aws_api_gateway_stage" "example" { # Sensitive xray_tracing_enabled = false # Sensitive } For Amazon MSK Broker logs: resource "aws_msk_cluster" "example" { cluster_name = "example" kafka_version = "2.7.1" number_of_broker_nodes = 3 logging_info { broker_logs { # Sensitive firehose { enabled = false } s3 { enabled = false } } } } For Amazon MQ Brokers: resource "aws_mq_broker" "example" { logs { # Sensitive audit = false general = false } } For Amazon DocumentDB: resource "aws_docdb_cluster" "example" { # Sensitive cluster_identifier = "example" } For Azure App Services: resource "azurerm_app_service" "example" { logs { application_logs { file_system_level = "Off" # Sensitive azure_blob_storage { level = "Off" # Sensitive } } } } For GCP VPC Subnetwork: resource "google_compute_subnetwork" "example" { # Sensitive name = "example" ip_cidr_range = "10.2.0.0/16" region = "us-central1" network = google_compute_network.example.id } For GCP SQL Database Instance: resource "google_sql_database_instance" "example" { name = "example" settings { # Sensitive tier = "db-f1-micro" ip_configuration { require_ssl = true ipv4_enabled = true } } } For GCP Kubernetes Engine (GKE) cluster: resource "google_container_cluster" "example" { name = "example" logging_service = "none" # Sensitive } Compliant SolutionFor Amazon S3 access requests: resource "aws_s3_bucket" "example-logs" { bucket = "example_logstorage" acl = "log-delivery-write" } resource "aws_s3_bucket" "example" { bucket = "example" logging { # AWS provider <= 3 target_bucket = aws_s3_bucket.example-logs.id target_prefix = "log/example" } } resource "aws_s3_bucket_logging" "example" { # AWS provider >= 4 bucket = aws_s3_bucket.example.id target_bucket = aws_s3_bucket.example-logs.id target_prefix = "log/example" } For Amazon API Gateway
stages: resource "aws_api_gateway_stage" "example" { xray_tracing_enabled = true access_log_settings { destination_arn = "arn:aws:logs:eu-west-1:123456789:example" format = "..." } } For Amazon MSK Broker logs: resource "aws_msk_cluster" "example" { cluster_name = "example" kafka_version = "2.7.1" number_of_broker_nodes = 3 logging_info { broker_logs { firehose { enabled = false } s3 { enabled = true bucket = "example" prefix = "log/msk-" } } } } For Amazon MQ Brokers, enable audit and general logs:
resource "aws_mq_broker" "example" { logs { audit = true general = true } } For Amazon DocumentDB: resource "aws_docdb_cluster" "example" { cluster_identifier = "example" enabled_cloudwatch_logs_exports = ["audit"] } For Azure App Services: resource "azurerm_app_service" "example" { logs { http_logs { file_system { retention_in_days = 90 retention_in_mb = 100 } } application_logs { file_system_level = "Error" azure_blob_storage { retention_in_days = 90 level = "Error" } } } } For GCP VPC Subnetwork: resource "google_compute_subnetwork" "example" { name = "example" ip_cidr_range = "10.2.0.0/16" region = "us-central1" network = google_compute_network.example.id log_config { aggregation_interval = "INTERVAL_10_MIN" flow_sampling = 0.5 metadata = "INCLUDE_ALL_METADATA" } } For GCP SQL Database Instance: resource "google_sql_database_instance" "example" { name = "example" settings { ip_configuration { require_ssl = true ipv4_enabled = true } database_flags { name = "log_connections" value = "on" } database_flags { name = "log_disconnections" value = "on" } database_flags { name = "log_checkpoints" value = "on" } database_flags { name = "log_lock_waits" value = "on" } } } For GCP Kubernetes Engine (GKE) cluster: resource "google_container_cluster" "example" { name = "example" logging_service = "logging.googleapis.com/kubernetes" } See
|
||||||||||||
terraform:S6330 |
Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary. Sensitive Code ExampleFor aws_sqs_queue: resource "aws_sqs_queue" "queue" { # Sensitive, encryption disabled by default name = "sqs-unencrypted" } Compliant SolutionFor aws_sqs_queue: resource "aws_sqs_queue" "queue" { name = "sqs-encrypted" kms_master_key_id = aws_kms_key.enc_key.key_id } See
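The compliant solution above references an aws_kms_key.enc_key resource that is not shown. A minimal sketch of such a key follows; the name matches the reference in the compliant example, and the attributes shown are illustrative:

```terraform
# Customer-managed KMS key used by the compliant SQS example.
resource "aws_kms_key" "enc_key" {
  description         = "Key used to encrypt SQS messages at rest"
  enable_key_rotation = true # rotate the key material yearly
}
```

With a customer-managed key like this, key policies and rotation are under your control, unlike the AWS-managed `alias/aws/sqs` key.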
|
||||||||||||
terraform:S6333 |
Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure. Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIn general, prefer limiting API access to a specific set of people or entities. AWS provides multiple methods to do so:
Sensitive Code ExampleA public API that doesn’t have access control implemented: resource "aws_api_gateway_method" "noncompliantapi" { authorization = "NONE" # Sensitive http_method = "GET" } Compliant SolutionAn API that implements AWS IAM permissions: resource "aws_api_gateway_method" "compliantapi" { authorization = "AWS_IAM" http_method = "GET" } See
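Besides AWS_IAM, API Gateway methods can also delegate authentication to a Cognito user pool. A hedged sketch, assuming an existing aws_api_gateway_authorizer resource named example backed by a user pool:

```terraform
# Method protected by a Cognito user pool authorizer instead of IAM.
resource "aws_api_gateway_method" "example" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  resource_id   = aws_api_gateway_resource.example.id
  http_method   = "GET"
  authorization = "COGNITO_USER_POOLS"
  authorizer_id = aws_api_gateway_authorizer.example.id
}
```

Any value other than NONE (AWS_IAM, COGNITO_USER_POOLS, or CUSTOM with a Lambda authorizer) ensures callers are authenticated before the backend is invoked.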
|
||||||||||||
terraform:S6378 |
Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks. Authenticating via managed identities to an Azure resource solely relies on an API call with a non-secret token. The process is inner to Azure: secrets used by Azure are not even accessible to end-users. In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions. By transparently taking care of the Azure Active Directory authentication, Managed Identities allow getting rid of day-to-day credentials management. Ask Yourself WhetherThe resource:
There is a risk if you answered yes to all of those questions. Recommended Secure Coding PracticesEnable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:
Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above. Sensitive Code ExampleFor Typical identity blocks: resource "azurerm_api_management" "example" { # Sensitive, the identity block is missing name = "example" publisher_name = "company" } For connections between Kusto Clusters and Azure Data Factory: resource "azurerm_data_factory_linked_service_kusto" "example" { name = "example" use_managed_identity = false # Sensitive } Compliant SolutionFor Typical identity blocks: resource "azurerm_api_management" "example" { name = "example" publisher_name = "company" identity { type = "SystemAssigned" } } For connections between Kusto Clusters and Azure Data Factory: resource "azurerm_data_factory_linked_service_kusto" "example" { name = "example" use_managed_identity = true } See |
||||||||||||
terraform:S6379 |
Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts. Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources. In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDisable the administrative accounts or permissions in this Azure resource. Sensitive Code ExampleFor Azure Batch Pools: resource "azurerm_batch_pool" "example" { name = "sensitive" start_task { user_identity { auto_user { elevation_level = "Admin" # Sensitive scope = "Task" } } } } For Azure Container Registries: resource "azurerm_container_registry" "example" { name = "example" admin_enabled = true # Sensitive } Compliant SolutionFor Azure Batch Pools: resource "azurerm_batch_pool" "example" { name = "example" start_task { user_identity { auto_user { elevation_level = "NonAdmin" scope = "Task" } } } } For Azure Container Registries: resource "azurerm_container_registry" "exemple" { name = "example" admin_enabled = false } See |
||||||||||||
terraform:S6410 |
The TLS configuration of Google Cloud load balancers is defined through SSL policies. Why is this an issue?There are three managed profiles to choose from: COMPATIBLE (the default), MODERN, and RESTRICTED.
The COMPATIBLE and MODERN profiles still allow cipher suites that are no longer considered strong; only the RESTRICTED profile limits connections to algorithms currently recommended by the cryptographic community. What is the potential impact?An attacker may be able to force the use of the insecure cryptographic algorithms, downgrading the security of the connection. This allows them to compromise the confidentiality or integrity of the data being transmitted. How to fix itCode examplesNoncompliant code exampleresource "google_compute_ssl_policy" "example" { name = "example" min_tls_version = "TLS_1_2" profile = "COMPATIBLE" # Noncompliant } Compliant solutionresource "google_compute_ssl_policy" "example" { name = "example" min_tls_version = "TLS_1_2" profile = "RESTRICTED" } How does this work?If an attacker is able to intercept and modify network traffic, they can filter the list of algorithms sent between the client and the server. By removing all secure algorithms from the list, the attacker can force the use of any insecure algorithms that remain. The RESTRICTED profile prevents this downgrade because no weak algorithm is offered in the first place. PitfallsOlder client applications may not support the algorithms required by the RESTRICTED profile. If such clients must be supported, keep the minimum TLS version at 1.2 and restrict the allowed algorithms as far as compatibility permits. ResourcesStandardsExternal coding guidelines
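When neither managed profile fits, an SSL policy can also pin an explicit set of cipher suites with the CUSTOM profile. A sketch follows; the two suites listed are illustrative examples of currently strong choices, not a recommendation list:

```terraform
# CUSTOM profile: only the explicitly listed cipher suites are offered.
resource "google_compute_ssl_policy" "custom" {
  name            = "custom-policy"
  min_tls_version = "TLS_1_2"
  profile         = "CUSTOM"
  custom_features = [
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
  ]
}
```

This keeps control over the exact algorithm list at the cost of having to track deprecations yourself, whereas the RESTRICTED profile is maintained by Google.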
|
||||||||||||
terraform:S6412 |
When object versioning for Google Cloud Storage (GCS) buckets is enabled, different versions of an object are stored in the bucket, preventing accidental deletion. A specific version can always be deleted when the generation number of an object version is specified in the request. Object versioning cannot be enabled on a bucket with a retention policy. A retention policy ensures that an object is retained for a specific period of time even if a request is made to delete or replace it. Thus, a retention policy locks the single current version of an object in the bucket, which differs from object versioning where different versions of an object are retained. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIt’s recommended to enable GCS bucket versioning so that earlier versions of an object can be retrieved and restored. Sensitive Code ExampleVersioning is disabled by default: resource "google_storage_bucket" "example" { # Sensitive name = "example" location = "US" } Compliant SolutionVersioning is enabled: resource "google_storage_bucket" "example" { name = "example" location = "US" versioning { enabled = "true" } } See
|
||||||||||||
terraform:S6413 |
Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident. Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions. Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIncrease the log retention period to a length of time sufficient to investigate and restore service in case of an incident. Sensitive Code ExampleFor AWS Cloudwatch Logs: resource "aws_cloudwatch_log_group" "example" { name = "example" retention_in_days = 3 # Sensitive } resource "azurerm_firewall_policy" "example" { insights { enabled = true retention_in_days = 7 # Sensitive } } For Google Cloud Logging buckets: resource "google_logging_project_bucket_config" "example" { project = var.project location = "global" retention_days = 7 # Sensitive bucket_id = "_Default" } Compliant SolutionFor AWS Cloudwatch Logs: resource "aws_cloudwatch_log_group" "example" { name = "example" retention_in_days = 30 } resource "azurerm_firewall_policy" "example" { insights { enabled = true retention_in_days = 30 } } For Google Cloud Logging buckets: resource "google_logging_project_bucket_config" "example" { project = var.project location = "global" retention_days = 30 bucket_id = "_Default" } |
||||||||||||
terraform:S6255 |
When S3 bucket versioning is enabled, it’s possible to require an additional authentication factor before versions of an object can be deleted or the versioning state of a bucket can be changed. This prevents accidental object deletion by forcing the user sending the delete request to prove that they have a valid MFA device and a corresponding valid token. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to enable S3 MFA delete. Note that:
Sensitive Code ExampleA versioned S3 bucket does not have MFA delete enabled for AWS provider version 3 or below: resource "aws_s3_bucket" "example" { # Sensitive bucket = "example" versioning { enabled = true } } A versioned S3 bucket does not have MFA delete enabled for AWS provider version 4 or above: resource "aws_s3_bucket" "example" { bucket = "example" } resource "aws_s3_bucket_versioning" "example" { # Sensitive bucket = aws_s3_bucket.example.id versioning_configuration { status = "Enabled" } } Compliant SolutionMFA delete is enabled for AWS provider version 3 or below: resource "aws_s3_bucket" "example" { bucket = "example" versioning { enabled = true mfa_delete = true } } MFA delete is enabled for AWS provider version 4 or above: resource "aws_s3_bucket" "example" { bucket = "example" } resource "aws_s3_bucket_versioning" "example" { bucket = aws_s3_bucket.example.id versioning_configuration { status = "Enabled" mfa_delete = "Enabled" } mfa = "${var.MFA}" } See
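The provider-4 compliant example passes mfa = "${var.MFA}" without declaring the variable. A declaration sketch follows; per the AWS API, the value is the MFA device serial (or ARN) followed by the current token code, and the ARN shown is illustrative:

```terraform
# Hypothetical variable backing the compliant example's var.MFA reference.
variable "MFA" {
  type        = string
  description = "MFA device ARN followed by the current code, e.g. \"arn:aws:iam::123456789012:mfa/user 123456\""
}
```

Because the token changes every 30 seconds, this value is typically supplied at apply time (for example with -var) rather than stored in a tfvars file.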
|
||||||||||||
terraform:S6332 |
Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise leak stored files, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary. Sensitive Code ExampleFor aws_efs_file_system: resource "aws_efs_file_system" "fs" { # Sensitive, encryption disabled by default } Compliant SolutionFor aws_efs_file_system: resource "aws_efs_file_system" "fs" { encrypted = true } See
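With encrypted = true alone, EFS uses the default AWS-managed key. A customer-managed KMS key can be supplied instead; a sketch assuming a dedicated key resource (names are illustrative):

```terraform
# Customer-managed key for EFS encryption at rest.
resource "aws_kms_key" "efs" {
  description = "CMK for EFS encryption at rest"
}

resource "aws_efs_file_system" "fs" {
  encrypted  = true
  kms_key_id = aws_kms_key.efs.arn # optional; omit to use the AWS-managed key
}
```

A customer-managed key allows you to define the key policy and audit its use through CloudTrail.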
|
||||||||||||
terraform:S6375 |
Azure Active Directory offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users. An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner. This rule raises an issue when one of the following roles is assigned:
Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices
Sensitive Code Exampleresource "azuread_directory_role" "example" { display_name = "Privileged Role Administrator" # Sensitive } resource "azuread_directory_role_member" "example" { role_object_id = azuread_directory_role.example.object_id member_object_id = data.azuread_user.example.object_id } Compliant Solutionresource "azuread_directory_role" "example" { display_name = "Usage Summary Reports Reader" } resource "azuread_directory_role_member" "example" { role_object_id = azuread_directory_role.example.object_id member_object_id = data.azuread_user.example.object_id } See
|
||||||||||||
php:S2115 |
When accessing a database, an empty password should be avoided as it introduces a weakness. Why is this an issue?When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials. What is the potential impact?Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains. Unauthorized Access to Sensitive DataWhen a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage. Compromise of System IntegrityWithout a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks. Unwanted Modifications or DeletionsThe absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. 
This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences. Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm. How to fix it in Core PHPCode examplesThe following code uses an empty password to connect to a MySQL database. The vulnerability can be fixed by using a strong password retrieved from an environment variable Noncompliant code example$conn = new mysqli($servername, $username, ""); // Noncompliant Compliant solution$password = getenv('MYSQL_SECURE_PASSWORD'); $conn = new mysqli($servername, $username, $password); PitfallsHard-coded passwordsIt could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:
To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase. ResourcesStandards |
||||||||||||
php:S4502 |
A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application. The attacker can trick the victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request. Because web browsers automatically include cookies, the forged actions can be authenticated and sensitive. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleFor Laravel VerifyCsrfToken middleware use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware; class VerifyCsrfToken extends Middleware { protected $except = [ 'api/*' ]; // Sensitive; disable CSRF protection for a list of routes } For Symfony Forms use Symfony\Bundle\FrameworkBundle\Controller\AbstractController; class Controller extends AbstractController { public function action() { $this->createForm('', null, [ 'csrf_protection' => false, // Sensitive; disable CSRF protection for a single form ]); } } Compliant SolutionFor Laravel VerifyCsrfToken middleware use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware; class VerifyCsrfToken extends Middleware { protected $except = []; // Compliant } Remember to add @csrf blade directive to the relevant forms when removing an element from $except. Otherwise the form submission will stop working. For Symfony Forms use Symfony\Bundle\FrameworkBundle\Controller\AbstractController; class Controller extends AbstractController { public function action() { $this->createForm('', null, []); // Compliant; CSRF protection is enabled by default } } See |
||||||||||||
php:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDo not enable debugging features on production servers or applications distributed to end users. Sensitive Code ExampleCakePHP 1.x, 2.x: Configure::write('debug', 1); // Sensitive: development mode or Configure::write('debug', 2); // Sensitive: development mode or Configure::write('debug', 3); // Sensitive: development mode CakePHP 3.0: use Cake\Core\Configure; Configure::config('debug', true); // Sensitive: development mode WordPress: define( 'WP_DEBUG', true ); // Sensitive: development mode Compliant SolutionCakePHP 1.x, 2.x: Configure::write('debug', 0); // Compliant; this is the production mode CakePHP 3.0: use Cake\Core\Configure; Configure::config('debug', false); // Compliant: "0" or "false" for CakePHP 3.x is suitable (production mode) to not leak sensitive data on the logs. WordPress: define( 'WP_DEBUG', false ); // Compliant See
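As a defence in depth, the debug flag can be derived from the environment instead of hard-coded, so deployments default to the safe value. A minimal sketch for WordPress; the WP_ENV variable name is an assumption, not a WordPress convention:

```php
<?php
// Hypothetical pattern: enable debug output only when an environment
// variable explicitly requests it. WP_ENV is an illustrative name;
// getenv() returns false when the variable is unset, so production
// deployments that never set it get WP_DEBUG = false.
define('WP_DEBUG', getenv('WP_ENV') === 'development');
```

This keeps the production configuration safe by default while still letting developers opt into debug mode locally.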
||||||||||||
php:S4508 |
This rule is deprecated, and will eventually be removed. Deserializing objects is security-sensitive. For example, it has led in the past to the following vulnerabilities:
Object deserialization from an untrusted source can lead to unexpected code execution. Deserialization takes a stream of bits and turns it into an
object. If the stream contains the type of object you expect, all is well. But if you’re deserializing data coming from untrusted input, and an
attacker has inserted some other type of object, you’re in trouble. Why? A known attack
scenario involves the creation of a serialized PHP object with crafted attributes which will modify your application’s behavior. This attack
relies on PHP magic methods such as __wakeup() or __destruct(). Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding PracticesTo prevent insecure deserialization, it is recommended to:
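One widely recommended mitigation, when unserialize() cannot be avoided, is to forbid object instantiation entirely via its allowed_classes option (available since PHP 7.0), so a crafted payload can never trigger magic methods:

```php
<?php
// Attacker-controlled serialized string representing an object.
$payload = 'O:8:"stdClass":0:{}';

// With allowed_classes set to false, any serialized object is turned
// into __PHP_Incomplete_Class instead of being instantiated, so no
// __wakeup()/__destruct() magic methods ever run.
$data = unserialize($payload, ['allowed_classes' => false]);

var_dump($data instanceof __PHP_Incomplete_Class); // bool(true)
```

For data interchange with untrusted parties, a format without code semantics such as JSON (json_decode()) is safer still.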
See
|
||||||||||||
php:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes). Ask Yourself WhetherArchives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleFor ZipArchive module: $zip = new ZipArchive(); if ($zip->open($file) === true) { $zip->extractTo('.'); // Sensitive $zip->close(); } For Zip module: $zip = zip_open($file); while ($file = zip_read($zip)) { $filename = zip_entry_name($file); $size = zip_entry_filesize($file); if (substr($filename, -1) !== '/') { $content = zip_entry_read($file, zip_entry_filesize($file)); // Sensitive - zip_entry_read() uses zip_entry_filesize() file_put_contents($filename, $content); } else { mkdir($filename); } } zip_close($zip); Compliant SolutionFor ZipArchive module: define('MAX_FILES', 10000); define('MAX_SIZE', 1000000000); // 1 GB define('MAX_RATIO', 10); define('READ_LENGTH', 1024); $fileCount = 0; $totalSize = 0; $zip = new ZipArchive(); if ($zip->open($file) === true) { for ($i = 0; $i < $zip->numFiles; $i++) { $filename = $zip->getNameIndex($i); $stats = $zip->statIndex($i); if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') { throw new Exception(); } if (substr($filename, -1) !== '/') { $fileCount++; if ($fileCount > MAX_FILES) { // Reached max. number of files throw new Exception(); } $fp = $zip->getStream($filename); // Compliant $currentSize = 0; while (!feof($fp)) { $currentSize += READ_LENGTH; $totalSize += READ_LENGTH; if ($totalSize > MAX_SIZE) { // Reached max. size throw new Exception(); } // Additional protection: check compression ratio if ($stats['comp_size'] > 0) { $ratio = $currentSize / $stats['comp_size']; if ($ratio > MAX_RATIO) { // Reached max. 
compression ratio throw new Exception(); } } file_put_contents($filename, fread($fp, READ_LENGTH), FILE_APPEND); } fclose($fp); } else { mkdir($filename); } } $zip->close(); } For Zip module: define('MAX_FILES', 10000); define('MAX_SIZE', 1000000000); // 1 GB define('MAX_RATIO', 10); define('READ_LENGTH', 1024); $fileCount = 0; $totalSize = 0; $zip = zip_open($file); while ($file = zip_read($zip)) { $filename = zip_entry_name($file); if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') { throw new Exception(); } if (substr($filename, -1) !== '/') { $fileCount++; if ($fileCount > MAX_FILES) { // Reached max. number of files throw new Exception(); } $currentSize = 0; while ($data = zip_entry_read($file, READ_LENGTH)) { // Compliant $currentSize += READ_LENGTH; $totalSize += READ_LENGTH; if ($totalSize > MAX_SIZE) { // Reached max. size throw new Exception(); } // Additional protection: check compression ratio if (zip_entry_compressedsize($file) > 0) { $ratio = $currentSize / zip_entry_compressedsize($file); if ($ratio > MAX_RATIO) { // Reached max. compression ratio throw new Exception(); } } file_put_contents($filename, $data, FILE_APPEND); } } else { mkdir($filename); } } zip_close($zip); See
|
||||||||||||
php:S2277 |
This rule is deprecated; use S5542 instead. Why is this an issue?Without OAEP in RSA encryption, it takes less work for an attacker to decrypt the data or infer patterns from the ciphertext. This rule logs an
issue when RSA encryption is performed without the OAEP padding scheme, as with OPENSSL_NO_PADDING in the example below. Noncompliant code examplefunction encrypt($data, $key) { $crypted=''; openssl_public_encrypt($data, $crypted, $key, OPENSSL_NO_PADDING); // Noncompliant return $crypted; } Compliant solutionfunction encrypt($data, $key) { $crypted=''; openssl_public_encrypt($data, $crypted, $key, OPENSSL_PKCS1_OAEP_PADDING); return $crypted; } Resources
|
||||||||||||
php:S2278 |
This rule is deprecated; use S5547 instead. Why is this an issue?According to the US National Institute of Standards and Technology (NIST), the Data Encryption Standard (DES) is no longer considered secure.
For similar reasons, RC2 should also be avoided. Noncompliant code example<?php $ciphertext = mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant // ... $ciphertext = mcrypt_encrypt(MCRYPT_DES_COMPAT, $key, $plaintext, $mode); // Noncompliant // ... $ciphertext = mcrypt_encrypt(MCRYPT_TRIPLEDES, $key, $plaintext, $mode); // Noncompliant // ... $ciphertext = mcrypt_encrypt(MCRYPT_3DES, $key, $plaintext, $mode); // Noncompliant $cipher = "des-ede3-cfb"; // Noncompliant $ciphertext_raw = openssl_encrypt($plaintext, $cipher, $key, $options=OPENSSL_RAW_DATA, $iv); ?> Compliant solution<?php $ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, MCRYPT_MODE_CBC, $iv); ?> Resources
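The compliant solution above still relies on the deprecated mcrypt extension. A sketch of an equivalent using the OpenSSL extension with AES-256-GCM, which also provides authentication of the ciphertext (key and message values are illustrative):

```php
<?php
// AES-256-GCM via the OpenSSL extension (PHP 7.1+ for the $tag parameter).
$key = random_bytes(32);                                      // 256-bit key
$iv  = random_bytes(openssl_cipher_iv_length('aes-256-gcm')); // 12-byte nonce
$tag = '';

// $tag is filled by reference with the authentication tag.
$ciphertext = openssl_encrypt('message', 'aes-256-gcm', $key,
                              OPENSSL_RAW_DATA, $iv, $tag);

// Decryption fails (returns false) if the ciphertext or tag was tampered with.
$plaintext = openssl_decrypt($ciphertext, 'aes-256-gcm', $key,
                             OPENSSL_RAW_DATA, $iv, $tag);
// $plaintext === 'message'
```

The IV/nonce must be unique per encryption under the same key and is typically stored or transmitted alongside the ciphertext and tag.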
|
||||||||||||
php:S3336 |
PHP session tokens are normally transmitted through HTTP cookies. However, for clients that do not support cookies, the PHP session.use_trans_sid option makes it possible to transmit the session token as a GET URL parameter instead.
Why is this an issue?A session token passed as a GET URL parameter can be disclosed in a variety of ways:
What is the potential impact?Attackers with access to any of those disclosure locations will be able to see and steal a victim’s session token. They can then use it to log in as the user, impersonate their account, and take advantage of their privileges. Such an attack can be more or less severe depending on the victim’s privileges. Common security impacts range from data theft to application takeover. Data theftAttackers with access to a compromised account will be able to disclose any information stored on it. This includes the Personally Identifiable Information (PII) of the user. The confidentiality of PII is a requirement from national security regulatory authorities in most countries. Insufficiently protecting this data could have legal consequences and lead to fines or other prosecutions. Application takeoverAttackers who compromise the account of a high-privileged user could modify internal web application logic, disrupt workflows, or change other application settings in a way that will give them full control over it. Such an attack would lead to reputational damages and financial and legal consequences. How to fix itCode examplesNoncompliant code example; php.ini session.use_trans_sid=1 ; Noncompliant Compliant solution; php.ini session.use_trans_sid=0 How does this work?The compliant code example disables the session.use_trans_sid option. Note that this parameter is off by default. ResourcesStandards |
||||||||||||
php:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution. For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in McryptCode examplesNoncompliant code exampleExample with a symmetric block cipher used in the weak ECB mode: mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, "ecb"); // Noncompliant Compliant solutionMcrypt is deprecated and should not be used. You can use Sodium instead. 
For the AES symmetric cipher, use the GCM mode: sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key); How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. Appropriate choices are currently the following. For AES: use authenticated encryption modesThe best-known authenticated encryption mode for AES is Galois/Counter mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are CCM, CWC, EAX, IAPM, and OCB.
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead. For RSA: use the OAEP schemeThe Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA. ResourcesArticles & blog posts
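As a concrete illustration of the authenticated-encryption pattern described above, here is a minimal sketch using libsodium, which is bundled with PHP 7.2+. XChaCha20-Poly1305 is used instead of AES-256-GCM only because it does not depend on CPU hardware support; the call shape is identical.

```php
<?php
// Sketch: authenticated encryption (AEAD) with libsodium (PHP >= 7.2).
// XChaCha20-Poly1305 provides both confidentiality and integrity, like
// AES-GCM, but is available on every platform.
$key   = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);

$plaintext  = 'attack at dawn';
$ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt($plaintext, '', $nonce, $key);

// Decryption returns false if the ciphertext or its tag was tampered with.
$decrypted = sodium_crypto_aead_xchacha20poly1305_ietf_decrypt($ciphertext, '', $nonce, $key);
var_dump($decrypted === $plaintext); // bool(true)
```

Note that the nonce must never be reused with the same key; generating it freshly with random_bytes() for each message, as above, is the usual approach.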
Standards |
||||||||||||
php:S5547 |
This vulnerability makes it possible to recover the cleartext of the encrypted message without prior knowledge of the key.

Why is this an issue? Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact? The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data: The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface: By modifying the cleartext of the encrypted message, it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples. The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example:

mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant

Compliant solution: Mcrypt is deprecated and should not be used. You can use Sodium instead.

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work? Use a secure algorithm. It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources: Standards |
||||||||||||
php:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example$random = rand(); $random2 = mt_rand(0, 99); Compliant Solution$randomInt = random_int(0,99); // Compliant; generates a cryptographically secure random integer See
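To make the compliant approach concrete, here is a minimal sketch of the CSPRNG functions available since PHP 7.0; the token-length choice is illustrative.

```php
<?php
// Sketch: cryptographically secure alternatives to rand()/mt_rand().
// random_int() and random_bytes() draw from the OS CSPRNG and throw an
// Exception if no secure randomness source is available.
$dieRoll = random_int(1, 6);            // unbiased integer in [1, 6]
$token   = bin2hex(random_bytes(32));   // 64-char hex token, e.g. for password resets

var_dump($dieRoll >= 1 && $dieRoll <= 6); // bool(true)
var_dump(strlen($token) === 64);          // bool(true)
```

Unlike mt_rand(), these functions are not seedable, so their output cannot be reproduced by an attacker who learns the seed.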
|
||||||||||||
php:S3334 |
File access functions in PHP are typically used to open local files. They are also capable of reading files from remote servers using protocols such as HTTP, HTTPS and FTP. This behavior is controlled by the allow_url_fopen and allow_url_include settings.

Why is this an issue? Most applications do not require or expect the file access functions to download remotely accessible files. However, attackers can abuse these remote file access features while exploiting other vulnerabilities, such as path traversal issues.

What is the potential impact? While activating these settings does not pose a direct threat to the application’s security, they can make the exploitation of other vulnerabilities easier and more severe. If an attacker can control a file location while allow_url_fopen is enabled, they can use that ability to reach remote, attacker-controlled resources. If allow_url_include is also enabled, they may be able to include and execute remote PHP code.

How to fix it

Code examples. Noncompliant code example:

; php.ini
; allow_url_fopen is enabled by default ; Noncompliant
allow_url_include=1 ; Noncompliant

Compliant solution:

; php.ini
allow_url_fopen=0
allow_url_include=0

Resources: Standards |
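As a defensive complement to the php.ini fix, an application can verify these directives at startup; a small sketch (the warning message is illustrative):

```php
<?php
// Sketch: checking remote-file-access directives at runtime.
// ini_get() returns the current value of a php.ini directive as a string.
$fopenAllowed   = filter_var(ini_get('allow_url_fopen'), FILTER_VALIDATE_BOOLEAN);
$includeAllowed = filter_var(ini_get('allow_url_include'), FILTER_VALIDATE_BOOLEAN);

if ($fopenAllowed || $includeAllowed) {
    // Surface the risky configuration instead of failing silently.
    error_log('Warning: remote file access directives are enabled in php.ini.');
}
```

This does not replace hardening php.ini itself; it only makes an unexpected configuration visible in the logs.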
||||||||||||
php:S3335 |
The cgi.force_redirect php.ini setting controls whether CGI scripts can only be executed after a web server redirect. When disabled, CGI scripts can be requested directly.

Why is this an issue? Pre-processing on the server side is often required to check user authentication when working in CGI mode. Those preliminary actions can also position diverse configuration parameters necessary for the CGI script to work correctly.

What is the potential impact? CGI scripts might behave unexpectedly if the proper configuration is not set up before they are accessed. The most serious security-related consequences will affect the authorization and authentication mechanisms of the application. When the web server is responsible for authenticating clients and forwarding the proper identity to the script, direct access will bypass this authentication step. Attackers could also provide arbitrary identities to the CGI script by forging specific HTTP headers or parameters. They could then impersonate any legitimate user of the application.

How to fix it

Note that this parameter is enabled by default.

Code examples. Noncompliant code example:

; php.ini
cgi.force_redirect=0 ; Noncompliant

Compliant solution:

; php.ini
cgi.force_redirect=1

Pitfalls: The cgi.force_redirect setting is not supported by every web server; the PHP documentation notes, for example, that it must be disabled with IIS, OmniHTTPD, and Xitami. While using such a server, the setting has to be turned off for CGI scripts to work at all.

Resources: Standards |
||||||||||||
php:S3337 |
The enable_dl php.ini setting controls whether PHP extensions can be loaded dynamically at runtime.

Why is this an issue? When dynamic loading is enabled, PHP code can load arbitrary PHP extensions by calling the dl() function. This can be used to circumvent restrictions put in place by other security-related settings. PHP defaults to allowing dynamic loading.

How to fix it

Code examples. Noncompliant code example:

; php.ini
enable_dl=1 ; Noncompliant

Compliant solution:

; php.ini
enable_dl=0

Resources: Standards |
||||||||||||
php:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue? Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact? After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface: By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

Breach of confidentiality and privacy: When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues: In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples. Noncompliant code example:

$opts = array(
    'ssl' => [
        'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_1_CLIENT // Noncompliant
    ],
    'http' => array(
        'method' => "GET"
    )
);
$context = stream_context_create($opts);
$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

Compliant solution:

$opts = array(
    'ssl' => [
        'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT
    ],
    'http' => array(
        'method' => "GET"
    )
);
$context = stream_context_create($opts);
$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

How does this work? As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3. Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older, insecure cipher suites. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources: Articles & blog posts
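Building on the compliant example above, here is a sketch of a stream context that additionally allows TLS 1.3 and keeps certificate verification explicit; the STREAM_CRYPTO_METHOD_TLSv1_3_CLIENT constant requires PHP 7.4+.

```php
<?php
// Sketch: a stream context that refuses anything below TLS 1.2.
// OR-ing the two constants permits only TLS 1.2 and TLS 1.3 during
// the handshake (PHP >= 7.4 for the TLS 1.3 constant).
$context = stream_context_create([
    'ssl' => [
        'crypto_method'    => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT
                            | STREAM_CRYPTO_METHOD_TLSv1_3_CLIENT,
        'verify_peer'      => true, // validate the server certificate chain
        'verify_peer_name' => true, // and that it matches the hostname
    ],
]);
var_dump(is_resource($context)); // bool(true)
```

The context can then be passed to fopen() or file_get_contents() exactly as in the compliant example.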
Standards
|
||||||||||||
php:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue? Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact? After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface: By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

Breach of confidentiality and privacy: When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues: In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples. Noncompliant code example. Here is an example of a private key generation with RSA:

$config = [
    "digest_alg" => "sha512",
    "private_key_bits" => 1024, // Noncompliant
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];
$res = openssl_pkey_new($config);

Compliant solution:

$config = [
    "digest_alg" => "sha512",
    "private_key_bits" => 2048,
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];
$res = openssl_pkey_new($config);

How does this work? As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community. The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm): The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.
AES (Advanced Encryption Standard): AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys. Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC): Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve
algorithms is mentioned directly in their names; secp256r1, for example, uses 256-bit keys. Currently, a minimum key size of 224 bits is recommended for EC-based algorithms. Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:
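To illustrate, here is a sketch that generates a key pair on a NIST-approved curve with the OpenSSL extension; the particular curve (P-384, named "secp384r1" in OpenSSL) is an illustrative choice.

```php
<?php
// Sketch: generating an elliptic-curve key pair with ext/openssl.
// 'curve_name' selects the curve; secp384r1 is NIST P-384 (384-bit keys).
$config = [
    'private_key_type' => OPENSSL_KEYTYPE_EC,
    'curve_name'       => 'secp384r1',
];
$key     = openssl_pkey_new($config);
$details = openssl_pkey_get_details($key);

var_dump($details['ec']['curve_name']); // string(9) "secp384r1"
```

The list of curves supported by the local OpenSSL build can be obtained with openssl_get_curve_names().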
Going the extra mile: Pre-Quantum Cryptography. Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources
Articles & blog posts
Standards
|
||||||||||||
php:S4787 |
This rule is deprecated; use S4426, S5542, S5547 instead. Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities: Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However these are not the only means to defeat or weaken an encryption. This rule flags function calls that initiate encryption/decryption. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleBuiltin functions function myEncrypt($cipher, $key, $data, $mode, $iv, $options, $padding, $infile, $outfile, $recipcerts, $headers, $nonce, $ad, $pub_key_ids, $env_keys) { mcrypt_ecb ($cipher, $key, $data, $mode); // Sensitive mcrypt_cfb($cipher, $key, $data, $mode, $iv); // Sensitive mcrypt_cbc($cipher, $key, $data, $mode, $iv); // Sensitive mcrypt_encrypt($cipher, $key, $data, $mode); // Sensitive openssl_encrypt($data, $cipher, $key, $options, $iv); // Sensitive openssl_public_encrypt($data, $crypted, $key, $padding); // Sensitive openssl_pkcs7_encrypt($infile, $outfile, $recipcerts, $headers); // Sensitive openssl_seal($data, $sealed_data, $env_keys, $pub_key_ids); // Sensitive sodium_crypto_aead_aes256gcm_encrypt ($data, $ad, $nonce, $key); // Sensitive sodium_crypto_aead_chacha20poly1305_encrypt ($data, $ad, $nonce, $key); // Sensitive sodium_crypto_aead_chacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive sodium_crypto_aead_xchacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive sodium_crypto_box_seal ($data, $key); // Sensitive sodium_crypto_box ($data, $nonce, $key); // Sensitive sodium_crypto_secretbox ($data, $nonce, $key); // Sensitive sodium_crypto_stream_xor ($data, $nonce, $key); // Sensitive } CakePHP use Cake\Utility\Security; function myCakeEncrypt($key, $data, $engine) { Security::encrypt($data, $key); // Sensitive // Do not use custom made engines and remember that Mcrypt is deprecated. Security::engine($engine); // Sensitive. Setting the encryption engine. } CodeIgniter class EncryptionController extends CI_Controller { public function __construct() { parent::__construct(); $this->load->library('encryption'); } public function index() { $this->encryption->create_key(16); // Sensitive. Review the key length. $this->encryption->initialize( // Sensitive. array( 'cipher' => 'aes-256', 'mode' => 'ctr', 'key' => 'the key', ) ); $this->encryption->encrypt("mysecretdata"); // Sensitive. 
} } CraftCMS version 3 use Craft; // This is similar to Yii as it used by CraftCMS function craftEncrypt($data, $key, $password) { Craft::$app->security->encryptByKey($data, $key); // Sensitive Craft::$app->getSecurity()->encryptByKey($data, $key); // Sensitive Craft::$app->security->encryptByPassword($data, $password); // Sensitive Craft::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive } Drupal 7 - Encrypt module function drupalEncrypt() { $encrypted_text = encrypt('some string to encrypt'); // Sensitive } Joomla use Joomla\Crypt\CipherInterface; abstract class MyCipher implements CipherInterface // Sensitive. Implementing custom cipher class {} function joomlaEncrypt() { new Joomla\Crypt\Cipher_Sodium(); // Sensitive new Joomla\Crypt\Cipher_Simple(); // Sensitive new Joomla\Crypt\Cipher_Rijndael256(); // Sensitive new Joomla\Crypt\Cipher_Crypto(); // Sensitive new Joomla\Crypt\Cipher_Blowfish(); // Sensitive new Joomla\Crypt\Cipher_3DES(); // Sensitive } } Laravel use Illuminate\Support\Facades\Crypt; function myLaravelEncrypt($data) { Crypt::encryptString($data); // Sensitive Crypt::encrypt($data); // Sensitive // encrypt using the Laravel "encrypt" helper encrypt($data); // Sensitive } PHP-Encryption library use Defuse\Crypto\Crypto; use Defuse\Crypto\File; function mypPhpEncryption($data, $key, $password, $inputFilename, $outputFilename, $inputHandle, $outputHandle) { Crypto::encrypt($data, $key); // Sensitive Crypto::encryptWithPassword($data, $password); // Sensitive File::encryptFile($inputFilename, $outputFilename, $key); // Sensitive File::encryptFileWithPassword($inputFilename, $outputFilename, $password); // Sensitive File::encryptResource($inputHandle, $outputHandle, $key); // Sensitive File::encryptResourceWithPassword($inputHandle, $outputHandle, $password); // Sensitive } PhpSecLib function myphpseclib($mode) { new phpseclib\Crypt\RSA(); // Sensitive. Note: RSA can also be used for signing data. 
new phpseclib\Crypt\AES(); // Sensitive new phpseclib\Crypt\Rijndael(); // Sensitive new phpseclib\Crypt\Twofish(); // Sensitive new phpseclib\Crypt\Blowfish(); // Sensitive new phpseclib\Crypt\RC4(); // Sensitive new phpseclib\Crypt\RC2(); // Sensitive new phpseclib\Crypt\TripleDES(); // Sensitive new phpseclib\Crypt\DES(); // Sensitive new phpseclib\Crypt\AES($mode); // Sensitive new phpseclib\Crypt\Rijndael($mode); // Sensitive new phpseclib\Crypt\TripleDES($mode); // Sensitive new phpseclib\Crypt\DES($mode); // Sensitive } Sodium Compat library function mySodiumCompatEncrypt($data, $ad, $nonce, $key) { ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_aead_xchacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_encrypt($data, $ad, $nonce, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_aead_aes256gcm_encrypt($data, $ad, $nonce, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_box($data, $nonce, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_secretbox($data, $nonce, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_box_seal($data, $key); // Sensitive ParagonIE_Sodium_Compat::crypto_secretbox_xchacha20poly1305($data, $nonce, $key); // Sensitive } Yii version 2 use Yii; // Similar to CraftCMS as it uses Yii function YiiEncrypt($data, $key, $password) { Yii::$app->security->encryptByKey($data, $key); // Sensitive Yii::$app->getSecurity()->encryptByKey($data, $key); // Sensitive Yii::$app->security->encryptByPassword($data, $password); // Sensitive Yii::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive } Zend use Zend\Crypt\FileCipher; use Zend\Crypt\PublicKey\DiffieHellman; use Zend\Crypt\PublicKey\Rsa; use Zend\Crypt\Hybrid; use Zend\Crypt\BlockCipher; function myZendEncrypt($key, $data, $prime, $options, $generator, $lib) { new FileCipher; // Sensitive. 
This is used to encrypt files new DiffieHellman($prime, $generator, $key); // Sensitive $rsa = Rsa::factory([ // Sensitive 'public_key' => 'public_key.pub', 'private_key' => 'private_key.pem', 'pass_phrase' => 'mypassphrase', 'binary_output' => false, ]); $rsa->encrypt($data); // No issue raised here. The configuration of the Rsa object is the line to review. $hybrid = new Hybrid(); // Sensitive BlockCipher::factory($lib, $options); // Sensitive } See
|
||||||||||||
php:S5876 |
An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue? Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:
What is the potential impact? Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation: Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data breach: If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege escalation: In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Symfony

Code examples. In a Symfony Security context, session fixation protection can be disabled with the value none. Session fixation protection is enabled by default in Symfony. It can be explicitly enabled with the values migrate and invalidate.

Noncompliant code example:

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'none', // Noncompliant
    ]);
};

Compliant solution:

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'migrate',
    ]);
};

How does this work? The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process. Here’s how session fixation protection typically works:
By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

Resources: Documentation: Security Configuration Reference - Session Fixation Strategy. Standards |
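Outside Symfony, the same strategy can be applied with plain PHP sessions; here is a minimal sketch in which the authenticate() function is a hypothetical stand-in for a real credential check.

```php
<?php
// Sketch: session fixation protection with plain PHP sessions.
// Regenerating the session ID right after a successful login invalidates
// any identifier an attacker may have planted before authentication.

// Hypothetical credential check, for illustration only.
function authenticate(string $user, string $pass): bool {
    return $user !== '' && $pass !== '';
}

session_start();
$preLoginId = session_id();

if (authenticate('alice', 's3cret')) {
    session_regenerate_id(true); // true = discard the old session data
    $_SESSION['user'] = 'alice';
}

var_dump(session_id() !== $preLoginId); // bool(true)
```

Passing true to session_regenerate_id() deletes the old session storage, so a fixed identifier becomes useless even if the attacker keeps presenting it.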
||||||||||||
php:S3330 |
When a cookie is configured with the HttpOnly attribute set to false, it can be read by client-side scripts, and thus stolen through an XSS vulnerability.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example. In php.ini you can specify the flags for the session cookie, which is security-sensitive:

session.cookie_httponly = 0; // Sensitive: this sensitive session cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, true, false); // Sensitive: this sensitive session cookie is created with the httponly flag (the fifth argument) set to false and so it can be stolen easily in case of XSS vulnerability

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, false); // Sensitive: this sensitive cookie is created with the httponly flag (the seventh argument) set to false and so it can be stolen easily in case of XSS vulnerability

By default, the httponly flag is set to false:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)
setrawcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)

Compliant Solution:

session.cookie_httponly = 1; // Compliant: the sensitive cookie is protected against theft thanks to cookie_httponly=1
session_set_cookie_params($lifetime, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the fifth argument set to true (HttpOnly=true)
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)
setrawcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)

See
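Since PHP 7.3, setcookie() also accepts an options array, which makes it harder to drop a security flag by miscounting positional arguments; a sketch (the cookie name, value, and domain are illustrative):

```php
<?php
// Sketch: setcookie() with the PHP 7.3+ options array.
// Naming every attribute makes the security posture explicit.
$value = 'opaque-token';
$ok = setcookie('session_token', $value, [
    'expires'  => time() + 3600,
    'path'     => '/',
    'domain'   => 'example.com', // illustrative domain
    'secure'   => true,          // only sent over HTTPS
    'httponly' => true,          // not readable from JavaScript
    'samesite' => 'Strict',      // also mitigates CSRF
]);
```

The 'samesite' attribute has no positional-argument equivalent, so the options array is the only way to set it directly through setcookie().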
|
||||||||||||
php:S3332 |
This rule is deprecated, and will eventually be removed.

Why is this an issue? Cookies without fixed lifetimes or expiration dates are known as non-persistent, or "session" cookies, meaning they last only as long as the browser session, and poof away when the browser closes. Cookies with expiration dates, "persistent" cookies, are stored/persisted until those dates. Non-persistent cookies should be used for the management of logged-in sessions on web sites. To make a cookie non-persistent, simply omit the expires attribute when creating it.

This rule raises an issue when an expiration date is set on a session cookie.

Resources
|
||||||||||||
php:S3333 |
When accessing files on the local filesystem, PHP can enforce security checks to defend against some attacks. The open_basedir php.ini setting restricts the directories from which files can be accessed.

Why is this an issue? The PHP runtime will allow the application to access all files underneath the configured set of directories. If no value is set, the application may access any file on the filesystem.

What is the potential impact? If an attacker can exploit a path traversal vulnerability, they will be able to access any file made available to the application’s user account. This may include system-critical or otherwise sensitive files. In shared hosting environments, a vulnerability can affect all co-hosted applications and not only the vulnerable one.

How to fix it

The main PHP configuration should define the open_basedir setting. Adding the current directory, denoted by ".", to the open_basedir list should be avoided, as it is resolved dynamically at runtime and can therefore change while the application runs.

Code examples. Noncompliant code example:

; php.ini
open_basedir="/:${USER}/scripts/data" ; Noncompliant; root directory in the list

; php.ini
; open_basedir= ; Noncompliant; setting commented out

Compliant solution:

; php.ini
open_basedir="${USER}/scripts/data"

; php.ini
open_basedir="/var/www/myapp/data"

Resources: Standards |
||||||||||||
php:S4784 |
This rule is deprecated; use S2631 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities: Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+ can take an extremely long time to evaluate on certain inputs. Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions, thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and contains at least two instances of any of the following characters: *+{. Example: (a+)+

The following functions are detected as executing regular expressions:

This rule’s goal is to guide security code reviews.

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check the error codes of PCRE functions via preg_last_error. Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. Do not run vulnerable regular expressions on user input. Use, if possible, a library which is not vulnerable to ReDoS attacks, such as Google RE2. Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection. Avoid executing a user input string as a regular expression, or at least escape it first with preg_quote.

Exceptions

An issue will be created for the functions listed above. The current implementation does not follow variables; it will only detect regular expressions hard-coded directly in the function call.

$pattern = "/(a+)+/";
$result = eregi($pattern, $input); // No issue will be raised even if it is Sensitive

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See
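The backtracking blow-up described above can be demonstrated, and bounded, with PCRE's runtime limits; a sketch (preg_last_error_msg() requires PHP 8.0+):

```php
<?php
// Sketch: detecting catastrophic backtracking at runtime.
// Lowering pcre.backtrack_limit bounds the work PCRE may do, and
// preg_last_error() reveals whether a match was aborted.
ini_set('pcre.backtrack_limit', '1000');

$evil  = '/(a+)+$/';                 // classic ReDoS-prone pattern
$input = str_repeat('a', 50) . 'b';  // input that forces heavy backtracking

$result = preg_match($evil, $input);
if ($result === false && preg_last_error() === PREG_BACKTRACK_LIMIT_ERROR) {
    // The engine gave up instead of hanging; treat as a non-match.
    error_log('Regex aborted: ' . preg_last_error_msg());
}
var_dump($result === false && preg_last_error() !== PREG_NO_ERROR); // bool(true)
```

Capping the limit turns a potential multi-second hang into an immediate, detectable failure, which the application can then handle as a non-match.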
|
||||||||||||
php:S2255 |
This rule is deprecated, and will eventually be removed. Using cookies is security-sensitive. It has led in the past to the following vulnerabilities: Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed. This rule flags code that writes cookies. Ask Yourself Whether
You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In very few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session. Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie: the encoding can be reverted and the original information will be exposed. Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies. As a side note, any information read from a cookie should be sanitized.

Sensitive Code Example

$value = "1234 1234 1234 1234";
// Review this cookie as it seems to send sensitive information (credit card number).
setcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive
setrawcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive

See
|
||||||||||||
php:S3331 |
This rule is deprecated, and will eventually be removed. A cookie’s domain specifies which websites should be able to read it. Left blank, browsers are supposed to only send the cookie to sites that exactly match the sending domain. For example, if a cookie was set by lovely.dream.com, it should only be readable by that domain, and not by nightmare.com or even strange.dream.com. If you want to allow sub-domain access for a cookie, you can specify it by adding a dot in front of the cookie’s domain, like so: .dream.com. But cookie domains should always use at least two levels. Cookie domains can be set either programmatically or via configuration. This rule raises an issue when any cookie domain is set with a single level, as in .com. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplesetcookie("TestCookie", $value, time()+3600, "/~path/", ".com", 1); // Noncompliant session_set_cookie_params(3600, "/~path/", ".com"); // Noncompliant // inside php.ini session.cookie_domain=".com"; // Noncompliant Compliant Solutionsetcookie("TestCookie", $value, time()+3600, "/~path/", ".myDomain.com", 1); session_set_cookie_params(3600, "/~path/", ".myDomain.com"); // inside php.ini session.cookie_domain=".myDomain.com"; See |
||||||||||||
php:S3338 |
This rule is deprecated, and will eventually be removed. Why is this an issue?
This rule raises an issue when Noncompliant code example; php.ini file_uploads=1 ; Noncompliant Compliant solution; php.ini file_uploads=0 Resources |
||||||||||||
php:S4433 |
Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:
A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials. Why is this an issue?When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory. What is the potential impact?An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores. Authentication bypassIf attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider. In such a case, all users configured in the directory might see their identity and privileges taken over. Sensitive information leakIf attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information. Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider. If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law. 
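A defensive sketch (the wrapper name is illustrative, not part of the rule): since an empty DN leads to an Anonymous bind and a non-empty DN with an empty password leads to an Unauthenticated bind, a small guard around ldap_bind() can refuse both before any connection attempt.

```php
<?php
function safe_ldap_bind($conn, ?string $dn, ?string $password): bool
{
    // An empty DN triggers the Anonymous mechanism; a non-empty DN with
    // an empty password triggers the Unauthenticated mechanism.
    if ($dn === null || $dn === '' || $password === null || $password === '') {
        throw new InvalidArgumentException('LDAP credentials are required');
    }
    return ldap_bind($conn, $dn, $password);
}
```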
How to fix itCode examplesThe following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism. Noncompliant code example$ldapconn = ldap_connect("ldap.example.com"); if ($ldapconn) { $ldapbind = ldap_bind($ldapconn); // Noncompliant } Compliant solution$ldaprdn = 'uname'; $ldappass = 'password'; $ldapconn = ldap_connect("ldap.example.com"); if ($ldapconn) { $ldapbind = ldap_bind($ldapconn, $ldaprdn, $ldappass); // Compliant } ResourcesDocumentation
Standards |
||||||||||||
php:S4790 |
Cryptographic hash algorithms such as Ask Yourself WhetherThe hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesSafer alternatives, such as Sensitive Code Example$hash = md5($data); // Sensitive $hash = sha1($data); // Sensitive Compliant Solution// for a password $hash = password_hash($password, PASSWORD_BCRYPT); // Compliant // other context $hash = hash("sha512", $data); See
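To complete the password_hash() example above, a sketch of the verification side (the password value is illustrative): password_verify() reads the algorithm, cost and salt from the stored hash itself and compares in constant time.

```php
<?php
$hash = password_hash('S3cr3t!', PASSWORD_BCRYPT);

$ok  = password_verify('S3cr3t!', $hash); // true
$bad = password_verify('guess', $hash);   // false

// When the default algorithm or cost changes, stored hashes can be
// migrated transparently at the next successful login:
if ($ok && password_needs_rehash($hash, PASSWORD_DEFAULT)) {
    $hash = password_hash('S3cr3t!', PASSWORD_DEFAULT);
}
```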
|
||||||||||||
php:S4792 |
This rule is deprecated, and will eventually be removed. Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities: Logs are useful before, during and after a security incident.
Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how it is logged. This rule flags for review code that initiates logger configuration. The goal is to guide security code reviews. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:
Sensitive Code ExampleBasic PHP configuration: function configure_logging() { error_reporting(E_RECOVERABLE_ERROR); // Sensitive error_reporting(32); // Sensitive ini_set('docref_root', '1'); // Sensitive ini_set('display_errors', '1'); // Sensitive ini_set('display_startup_errors', '1'); // Sensitive ini_set('error_log', "path/to/logfile"); // Sensitive - check logfile is secure ini_set('error_reporting', E_PARSE ); // Sensitive ini_set('error_reporting', 64); // Sensitive ini_set('log_errors', '0'); // Sensitive ini_set('log_errors_max_length', '512'); // Sensitive ini_set('ignore_repeated_errors', '1'); // Sensitive ini_set('ignore_repeated_source', '1'); // Sensitive ini_set('track_errors', '0'); // Sensitive ini_alter('docref_root', '1'); // Sensitive ini_alter('display_errors', '1'); // Sensitive ini_alter('display_startup_errors', '1'); // Sensitive ini_alter('error_log', "path/to/logfile"); // Sensitive - check logfile is secure ini_alter('error_reporting', E_PARSE ); // Sensitive ini_alter('error_reporting', 64); // Sensitive ini_alter('log_errors', '0'); // Sensitive ini_alter('log_errors_max_length', '512'); // Sensitive ini_alter('ignore_repeated_errors', '1'); // Sensitive ini_alter('ignore_repeated_source', '1'); // Sensitive ini_alter('track_errors', '0'); // Sensitive } Definition of custom loggers with abstract class MyLogger implements \Psr\Log\LoggerInterface { // Sensitive // ... } abstract class MyLogger2 extends \Psr\Log\AbstractLogger { // Sensitive // ... } abstract class MyLogger3 { use \Psr\Log\LoggerTrait; // Sensitive // ... } ExceptionsNo issue will be raised for logger configuration when it follows recommended settings for production servers. 
The following examples are all valid: ini_set('docref_root', '0'); ini_set('display_errors', '0'); ini_set('display_startup_errors', '0'); error_reporting(0); ini_set('error_reporting', 0); ini_set('log_errors', '1'); ini_set('log_errors_max_length', '0'); ini_set('ignore_repeated_errors', '0'); ini_set('ignore_repeated_source', '0'); ini_set('track_errors', '1'); See
|
||||||||||||
php:S5527 |
This vulnerability allows attackers to impersonate a trusted host. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security. When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. To do so, an attacker would obtain a valid certificate authenticating What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. How to fix it in cURLCode examplesThe following code contains examples of disabled hostname validation. 
The hostname validation gets disabled by setting Noncompliant code example$curl = curl_init(); curl_setopt($curl, CURLOPT_URL, 'https://example.com/'); curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0); // Noncompliant curl_exec($curl); curl_close($curl); Compliant solution$curl = curl_init(); curl_setopt($curl, CURLOPT_URL, 'https://example.com/'); curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 2); curl_exec($curl); curl_close($curl); How does this work?To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate. Use valid certificatesIf a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues. Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself. In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:
ResourcesStandards
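Beyond cURL, the same principle applies to PHP's native stream wrappers (a sketch; the URL is illustrative): peer and hostname verification are on by default and should be left enabled, never switched off to "fix" a certificate error.

```php
<?php
// Explicitly keeping the secure defaults; setting either option to
// false would reintroduce the vulnerability described above.
$context = stream_context_create([
    'ssl' => [
        'verify_peer'      => true, // validate the certificate chain
        'verify_peer_name' => true, // validate the hostname
    ],
]);

// $page = file_get_contents('https://example.com/', false, $context);
```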
|
||||||||||||
php:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example$password = "65DBGgwe4uazdWQA"; // Sensitive $httpUrl = "https://example.domain?user=user&password=65DBGgwe4uazdWQA"; // Sensitive $sshUrl = "ssh://user:65DBGgwe4uazdWQA@example.domain"; // Sensitive Compliant Solution$user = getUser(); $password = getPassword(); // Compliant $httpUrl = "https://example.domain?user=$user&password=$password"; // Compliant $sshUrl = "ssh://$user:$password@example.domain"; // Compliant See
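A sketch of the compliant pattern (the function, variable, and environment names are illustrative): credentials come from the process environment, and the application fails fast when they are missing instead of falling back to a hard-coded default.

```php
<?php
function load_db_credentials(): array
{
    $user     = getenv('DB_USER');
    $password = getenv('DB_PASSWORD');

    if ($user === false || $password === false) {
        // Never fall back to a hard-coded value here.
        throw new RuntimeException('Database credentials are not configured');
    }
    return [$user, $password];
}
```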
|
||||||||||||
php:S5332 |
Clear-text protocols such as
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code Example$url = "http://example.com"; // Sensitive $url = "ftp://anonymous@example.com"; // Sensitive $url = "telnet://anonymous@example.com"; // Sensitive $con = ftp_connect('example.com'); // Sensitive $trans = (new Swift_SmtpTransport('XXX', 1234)); // Sensitive $mailer = new PHPMailer(true); // Sensitive define( 'FORCE_SSL_ADMIN', false); // Sensitive define( 'FORCE_SSL_LOGIN', false); // Sensitive Compliant Solution$url = "https://example.com"; $url = "sftp://anonymous@example.com"; $url = "ssh://anonymous@example.com"; $con = ftp_ssl_connect('example.com'); $trans = (new Swift_SmtpTransport('smtp.example.org', 1234)) ->setEncryption('tls') ; $mailer = new PHPMailer(true); $mailer->SMTPSecure = 'tls'; define( 'FORCE_SSL_ADMIN', true); define( 'FORCE_SSL_LOGIN', true); ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
php:S5693 |
Rejecting requests with a significant content length is a good practice for controlling network traffic intensity, and thus resource consumption, in order to prevent DoS attacks. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to customize the rule with the limit values that correspond to the web application. Sensitive Code ExampleFor Symfony Constraints: use Symfony\Component\Validator\Constraints as Assert; use Symfony\Component\Validator\Mapping\ClassMetadata; class TestEntity { public static function loadValidatorMetadata(ClassMetadata $metadata) { $metadata->addPropertyConstraint('upload', new Assert\File([ 'maxSize' => '100M', // Sensitive ])); } } For Laravel Validator: use App\Http\Controllers\Controller; use Illuminate\Http\Request; class TestController extends Controller { public function test(Request $request) { $validatedData = $request->validate([ 'upload' => 'required|file', // Sensitive ]); } } Compliant SolutionFor Symfony Constraints: use Symfony\Component\Validator\Constraints as Assert; use Symfony\Component\Validator\Mapping\ClassMetadata; class TestEntity { public static function loadValidatorMetadata(ClassMetadata $metadata) { $metadata->addPropertyConstraint('upload', new Assert\File([ 'maxSize' => '8M', // Compliant ])); } } For Laravel Validator: use App\Http\Controllers\Controller; use Illuminate\Http\Request; class TestController extends Controller { public function test(Request $request) { $validatedData = $request->validate([ 'upload' => 'required|file|max:8000', // Compliant ]); } } See
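For applications not using Symfony or Laravel, the same check can be sketched in plain PHP (the 8 MB limit and function name are illustrative; the limit should match what is configured in php.ini):

```php
<?php
const MAX_UPLOAD_BYTES = 8 * 1024 * 1024; // illustrative 8 MB limit

function is_upload_size_acceptable(int $contentLength): bool
{
    return $contentLength > 0 && $contentLength <= MAX_UPLOAD_BYTES;
}

// Typical use, early in request handling:
// if (!is_upload_size_acceptable((int) ($_SERVER['CONTENT_LENGTH'] ?? 0))) {
//     http_response_code(413); // Payload Too Large
//     exit;
// }
```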
|
||||||||||||
php:S6437 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?The consequences vary greatly depending on the situation and the audience the secret was exposed to. Still, two main scenarios should be considered. Financial lossFinancial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can consume the third-party service without limit for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users. Application’s security downgradeA downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component. 
For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Analyze recent secret use When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Use a secret vault A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. Code examplesThe following code example is noncompliant because it uses a hardcoded secret value. Noncompliant code exampleuse Defuse\Crypto\KeyOrPassword; function createKey() { $password = "3xAmpl3"; // Noncompliant return KeyOrPassword::createFromPassword($password); } Compliant solutionuse Defuse\Crypto\KeyOrPassword; function createKey() { $password = $_ENV["SECRET"]; return KeyOrPassword::createFromPassword($password); } How does this work?While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This makes the secret value environment-dependent and avoids storing the password in the source code itself. Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change. 
ResourcesDocumentation
Standards |
||||||||||||
php:S2070 |
This rule is deprecated; use S4790 instead. Why is this an issue?The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as if they had the originally hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMAC-RIPEMD160. Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3. Noncompliant code example$password = ... if (md5($password) === '1f3870be274f6c49b3e31a0c6728957f') { // Noncompliant; md5() hashing algorithm is not secure for password management [...] } if (sha1($password) === 'd0be2dc421be4fcd0172e5afceea3970e2f3d940') { // Noncompliant; sha1() hashing algorithm is not secure for password management [...] } Resources
|
||||||||||||
php:S2077 |
Formatted SQL queries can be difficult to maintain and debug, and can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example$id = $_GET['id']; mysql_connect('localhost', $username, $password) or die('Could not connect: ' . mysql_error()); mysql_select_db('myDatabase') or die('Could not select database'); $result = mysql_query("SELECT * FROM myTable WHERE id = " . $id); // Sensitive, could be susceptible to SQL injection while ($row = mysql_fetch_object($result)) { echo $row->name; } Compliant Solution$id = $_GET['id']; try { $conn = new PDO('mysql:host=localhost;dbname=myDatabase', $username, $password); $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $stmt = $conn->prepare('SELECT * FROM myTable WHERE id = :id'); $stmt->execute(array('id' => $id)); while($row = $stmt->fetch(PDO::FETCH_OBJ)) { echo $row->name; } } catch(PDOException $e) { echo 'ERROR: ' . $e->getMessage(); } ExceptionsNo issue will be raised if one of the functions is called with a hard-coded string (no concatenation) and this string does not contain a "$" sign. $result = mysql_query("SELECT * FROM myTable WHERE id = 42") or die('Query failed: ' . mysql_error()); // Compliant The current implementation does not follow variables. It will only detect SQL queries which are concatenated or contain a $query = "SELECT * FROM myTable WHERE id = " . $id; $result = mysql_query($query); // No issue will be raised even if it is Sensitive See
|
||||||||||||
php:S2755 |
This vulnerability allows the usage of external entities in XML. Why is this an issue?External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack. What is the potential impact?Exposing sensitive dataOne significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information. Exhausting system resourcesAnother consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience. Forging requestsXXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure. 
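In addition to the per-parser settings shown in the fix section below, PHP offers a process-wide safety net (a sketch, not part of the rule text): registering an external entity loader that always fails blocks every attempt to fetch an external entity, whichever libxml-based parser triggers it.

```php
<?php
// Any attempt to resolve an external entity will now fail, regardless
// of which libxml-based parser (DOM, SimpleXML, XMLReader) is used.
libxml_set_external_entity_loader(
    function (?string $publicId, ?string $systemId, array $context) {
        return null; // refuse to load anything external
    }
);
```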
How to fix it in Core PHPCode examplesThe following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed. Noncompliant code example$xml = file_get_contents('xxe.xml'); $doc = simplexml_load_string($xml, 'SimpleXMLElement', LIBXML_NOENT); // Noncompliant $doc = new DOMDocument(); $doc->load('xxe.xml', LIBXML_NOENT); // Noncompliant $reader = new XMLReader(); $reader->open('xxe.xml'); $reader->setParserProperty(XMLReader::SUBST_ENTITIES, true); // Noncompliant Compliant solutionExternal entity substitution is disabled by default in $xml = file_get_contents('xxe.xml'); $doc = simplexml_load_string($xml, 'SimpleXMLElement'); $doc = new DOMDocument(); $doc->load('xxe.xml'); $reader = new XMLReader(); $reader->open('xxe.xml'); $reader->setParserProperty(XMLReader::SUBST_ENTITIES, false); How does this work?Disable external entitiesThe most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework. If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved
during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are
processed. ResourcesStandards |
||||||||||||
php:S4818 |
This rule is deprecated, and will eventually be removed. Using sockets is security-sensitive. It has led in the past to the following vulnerabilities: Sockets are vulnerable in multiple ways:
This rule flags code that creates sockets. It matches only the direct use of sockets, not their use through frameworks or high-level APIs such as HTTP connections. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplefunction handle_sockets($domain, $type, $protocol, $port, $backlog, $addr, $hostname, $local_socket, $remote_socket, $fd) { socket_create($domain, $type, $protocol); // Sensitive socket_create_listen($port, $backlog); // Sensitive socket_addrinfo_bind($addr); // Sensitive socket_addrinfo_connect($addr); // Sensitive socket_create_pair($domain, $type, $protocol, $fd); fsockopen($hostname); // Sensitive pfsockopen($hostname); // Sensitive stream_socket_server($local_socket); // Sensitive stream_socket_client($remote_socket); // Sensitive stream_socket_pair($domain, $type, $protocol); // Sensitive } See |
||||||||||||
php:S2964 |
This rule is deprecated, and will eventually be removed. Why is this an issue?
Noncompliant code exampleif (is_bad_ip($requester)) { sleep(5); // Noncompliant } Resources |
||||||||||||
php:S5328 |
If a session ID can be guessed (because it is not generated with a secure pseudorandom generator, or is of insufficient length, …), an attacker may be able to hijack another user’s session. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDon’t manually generate session IDs; use the language’s native functionality instead. Sensitive Code Examplesession_id(bin2hex(random_bytes(4))); // Sensitive: 4 bytes is too short session_id($_POST["session_id"]); // Sensitive: session ID can be specified by the user Compliant Solutionsession_regenerate_id(); // Compliant session_id(bin2hex(random_bytes(16))); // Compliant See
|
||||||||||||
php:S1523 |
Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities: Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: an XSS attack) and have a huge impact on an application’s security. This rule marks for review each occurrence of the Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesRegarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (examples: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser). Do not try to create a blacklist of dangerous code; it is impossible to cover all attacks that way. Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer. Sensitive Code Exampleeval($code_to_be_dynamically_executed); See |
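A sketch of the "avoid dynamic code APIs" advice (the operation names and callables are illustrative): instead of eval()'ing a user-supplied string, dispatch over a hard-coded allowlist and reject anything else.

```php
<?php
$operations = [
    'uppercase' => fn (string $s): string => strtoupper($s),
    'reverse'   => fn (string $s): string => strrev($s),
];

function run_operation(array $operations, string $name, string $input): string
{
    if (!array_key_exists($name, $operations)) {
        // Unknown names are rejected; no user string is ever executed.
        throw new InvalidArgumentException("Unknown operation: $name");
    }
    return $operations[$name]($input);
}

echo run_operation($operations, 'uppercase', 'abc'), "\n"; // ABC
```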
||||||||||||
php:S2053 |
This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes. Why is this an issue?During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords. However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital. What is the potential impact?Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need. Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster. If multiple users have the same password and the same salt, their password hashes would be identical. 
This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once. A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salts that can then be attacked as explained before. With short salts, the probability of a collision between two users' password-and-salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred. ExceptionsTo securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:
When they are used for password storage, using a secure, random salt is required. However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted. How to fix it in Core PHP: Code examples: The following code contains examples of hard-coded salts. Noncompliant code example: $salt = 'salty'; $hash = hash_pbkdf2('sha256', $password, $salt, 100000); // Noncompliant Compliant solution: $salt = random_bytes(16); $hash = hash_pbkdf2('sha256', $password, $salt, 100000); How does this work? This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 32 bytes (256 bits), as recommended by industry standards. Here, the compliant code example ensures the salt is random and has a sufficient length by calling the random_bytes function. Resources: Standards |
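The impact described above can be illustrated with a short sketch. It assumes the same `hash_pbkdf2` call as the examples in this rule and uses a hypothetical password value; with a shared fixed salt, two users with the same password end up with identical hashes, while per-user random salts keep the hashes distinct:

```php
// Demonstration (assumed password value): a shared, fixed salt makes
// identical passwords produce identical hashes.
$fixedSalt = 'salty';
$hashA = hash_pbkdf2('sha256', 'hunter2', $fixedSalt, 100000);
$hashB = hash_pbkdf2('sha256', 'hunter2', $fixedSalt, 100000);
// $hashA === $hashB: cracking one hash cracks both accounts.

// With per-user random salts, the hashes differ even for equal passwords.
$hashC = hash_pbkdf2('sha256', 'hunter2', random_bytes(16), 100000);
$hashD = hash_pbkdf2('sha256', 'hunter2', random_bytes(16), 100000);
// $hashC !== $hashD
```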
||||||||||||
php:S2612 |
In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource. Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesThe most restrictive possible permissions should be assigned to files and directories. Sensitive Code Examplechmod("foo", 0777); // Sensitive umask(0); // Sensitive umask(0750); // Sensitive For Symfony Filesystem: use Symfony\Component\Filesystem\Filesystem; $fs = new Filesystem(); $fs->chmod("foo", 0777); // Sensitive For Laravel Filesystem: use Illuminate\Filesystem\Filesystem; $fs = new Filesystem(); $fs->chmod("foo", 0777); // Sensitive Compliant Solutionchmod("foo", 0750); // Compliant umask(0027); // Compliant For Symfony Filesystem: use Symfony\Component\Filesystem\Filesystem; $fs = new Filesystem(); $fs->chmod("foo", 0750); // Compliant For Laravel Filesystem: use Illuminate\Filesystem\Filesystem; $fs = new Filesystem(); $fs->chmod("foo", 0750); // Compliant See
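As a sketch of how the compliant `umask(0027)` value shapes new resources: the mask removes group-write and all "others" bits from the requested mode, so a directory requested with 0777 is actually created with 0750 (on POSIX systems; the temp-directory name below is illustrative):

```php
// umask(0027) clears group-write and every "others" bit:
// effective mode = 0777 & ~0027 = 0750.
$previous = umask(0027);
$dir = sys_get_temp_dir() . '/perm_demo_' . bin2hex(random_bytes(4));
mkdir($dir, 0777);                // created as 0750 despite the 0777 request
$mode = fileperms($dir) & 0777;   // read back the permission bits
umask($previous);                 // restore the previous mask
rmdir($dir);                      // clean up the demo directory
```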
|
||||||||||||
php:S6345 |
External requests initiated by a WordPress server should be considered as security-sensitive. They may contain sensitive data which is stored in the files or in the database of the server. It’s important for the administrator of a WordPress server to understand what they contain and to which server they are sent. WordPress makes it possible to block external requests by setting the WP_HTTP_BLOCK_EXTERNAL option to true; requests to a few trusted servers can then be authorized with the WP_ACCESSIBLE_HOSTS option. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampledefine( 'WP_HTTP_BLOCK_EXTERNAL', false ); // Sensitive Compliant Solutiondefine( 'WP_HTTP_BLOCK_EXTERNAL', true ); define( 'WP_ACCESSIBLE_HOSTS', 'api.wordpress.org' ); See
|
||||||||||||
php:S6348 |
By default, the WordPress administrator and editor roles can add unfiltered HTML content in various places, such as post content. This includes the capability to add JavaScript code. If an account with such a role gets hijacked, this capability can be used to plant malicious JavaScript code that gets executed whenever somebody visits the website. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: The DISALLOW_UNFILTERED_HTML option should be activated to remove this capability. Sensitive Code Example: define( 'DISALLOW_UNFILTERED_HTML', false ); // Sensitive Compliant Solution: define( 'DISALLOW_UNFILTERED_HTML', true ); See
|
||||||||||||
php:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact. Ask Yourself Whether: The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices: Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Example: $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP); socket_connect($socket, '8.8.8.8', 23); // Sensitive Compliant Solution: $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP); socket_connect($socket, IP_ADDRESS, 23); // Compliant Exceptions: No issue is reported for the following cases because they are not considered sensitive:
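The environment-variable approach recommended above can be sketched as a small helper. The variable name `SERVICE_HOST` and the local fallback are assumptions for illustration, not part of the rule:

```php
// Hypothetical helper: resolve the peer address from the environment
// (SERVICE_HOST is an assumed variable name set by the deployment)
// instead of hard-coding an IP address in the source.
function serviceHost(): string
{
    $host = getenv('SERVICE_HOST');
    return $host !== false ? $host : '127.0.0.1'; // local-development fallback
}
```

The address can then change per environment without rebuilding the software.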
See |
||||||||||||
php:S6341 |
WordPress makes it possible to edit theme and plugin files directly in the Administration Screens. While it may look like an easy way to customize
a theme or do a quick change, it’s a dangerous feature. When visiting the theme or plugin editor for the first time, WordPress displays a warning to
make it clear that using such a feature may break the web site by mistake. More importantly, users who have access to this feature can trigger the
execution of any PHP code and may therefore take full control of the WordPress instance. This security risk could be exploited by an attacker who
manages to get access to one of the authorized users. Setting the DISALLOW_FILE_EDIT option to true disables this dangerous feature. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampledefine( 'DISALLOW_FILE_EDIT', false ); // Sensitive Compliant Solutiondefine( 'DISALLOW_FILE_EDIT', true ); See
|
||||||||||||
php:S6343 |
Automatic updates are a great way of making sure your application gets security updates as soon as they are available. Once a vendor releases a security update, it is crucial to apply it in a timely manner before malicious actors exploit the vulnerability. Relying on manual updates is usually too late, especially if the application is publicly accessible on the internet. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDon’t deactivate automatic updates unless you have a good reason to do so. This way, you’ll be sure to receive security updates as soon as they are available. If you are worried about an automatic update breaking something, check if it is possible to only activate automatic updates for minor or security updates. Sensitive Code Exampledefine( 'WP_AUTO_UPDATE_CORE', false ); // Sensitive define( 'AUTOMATIC_UPDATER_DISABLED', true ); // Sensitive Compliant Solutiondefine( 'WP_AUTO_UPDATE_CORE', true ); // Minor and major automatic updates enabled define( 'WP_AUTO_UPDATE_CORE', 'minor' ); // Only minor updates are enabled define( 'AUTOMATIC_UPDATER_DISABLED', false ); See
|
||||||||||||
php:S6346 |
WordPress has a database repair and optimization mode that can be activated by setting the WP_ALLOW_REPAIR option to true in the configuration. If activated, the repair page can be accessed by any user, authenticated or not. This makes sense because if the database is corrupted, the authentication mechanism might not work. Malicious users could trigger this potentially costly operation repeatedly, slowing down the website and making it unavailable. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: It’s recommended to enable automatic database repair mode only in case of database corruption. This feature should be deactivated again when the database issue is resolved. Sensitive Code Example: define( 'WP_ALLOW_REPAIR', true ); // Sensitive Compliant Solution: // The default value is false, so the value does not have to be explicitly set. define( 'WP_ALLOW_REPAIR', false ); See
|
||||||||||||
php:S4823 |
This rule is deprecated, and will eventually be removed. Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities: Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized. Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered as insecure. This rule raises an issue on every program entry point where command line arguments are used; the goal is to guide security code reviews. Ask Yourself Whether
If you answered yes to any of these questions you are at risk. Recommended Secure Coding Practices: Sanitize all command line arguments before using them. Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing them in the command line. It is common to write them on the process' standard input, or give the path to a file containing the information. Sensitive Code Example: Builtin access to $argv: function globfunc() { global $argv; // Sensitive. Reference to global $argv foreach ($argv as $arg) { // Sensitive. // ... } } function myfunc($argv) { $param = $argv[0]; // OK. Reference to local $argv parameter // ... } foreach ($argv as $arg) { // Sensitive. Reference to $argv. // ... } $myargv = $_SERVER['argv']; // Sensitive. Equivalent to $argv. function serve() { $myargv = $_SERVER['argv']; // Sensitive. // ... } myfunc($argv); // Sensitive $myvar = $HTTP_SERVER_VARS[0]; // Sensitive. Note: HTTP_SERVER_VARS has been removed since PHP 5.4. $options = getopt('a:b:'); // Sensitive. Parsing arguments. $GLOBALS["argv"]; // Sensitive. Equivalent to $argv. function myglobals() { $GLOBALS["argv"]; // Sensitive } $argv = [1,2,3]; // Sensitive. It is a bad idea to override $argv. Zend Console: new Zend\Console\Getopt(['myopt|m' => 'this is an option']); // Sensitive Getopt-php library: new \GetOpt\Option('m', 'myoption', \GetOpt\GetOpt::REQUIRED_ARGUMENT); // Sensitive See
||||||||||||
php:S4828 |
Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system. Accidentally setting an incorrect PID or signal, or allowing untrusted sources to control either of them, can result in a denial of service. Also, the system treats the signal differently if the destination PID is less than or equal to 0, in which case it may be delivered to a whole process group rather than a single process. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example$targetPid = (int)$_GET["pid"]; posix_kill($targetPid, 9); // Sensitive Compliant Solution$targetPid = (int)$_GET["pid"]; // Validate the untrusted PID, // With a pre-approved list or authorization checks if (isValidPid($targetPid)) { posix_kill($targetPid, 9); } See |
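The `isValidPid` helper referenced in the compliant solution above is not defined by the rule; a minimal hypothetical sketch, assuming a pre-approved list of PIDs, could look like this:

```php
// Hypothetical isValidPid(): accept only PIDs from a pre-approved list and
// reject values <= 1, which would target process groups or init.
function isValidPid(int $pid): bool
{
    $allowedPids = [1234, 5678]; // assumption: populated from trusted state
    return $pid > 1 && in_array($pid, $allowedPids, true);
}
```

An authorization check (is the current user allowed to signal this process?) would be an alternative or complement to the allow-list.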
||||||||||||
php:S4829 |
This rule is deprecated, and will eventually be removed. Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities: It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated. This rule flags code that reads from the standard input. Ask Yourself Whether
You are at risk if you answered yes to this question. Recommended Secure Coding PracticesSanitize all data read from the standard input before using it. Sensitive Code Example// Any reference to STDIN is Sensitive $varstdin = STDIN; // Sensitive stream_get_line(STDIN, 40); // Sensitive stream_copy_to_stream(STDIN, STDOUT); // Sensitive // ... // Except those references as they can't create an injection vulnerability. ftruncate(STDIN, 5); // OK ftell(STDIN); // OK feof(STDIN); // OK fseek(STDIN, 5); // OK fclose(STDIN); // OK // STDIN can also be referenced like this $mystdin = 'php://stdin'; // Sensitive file_get_contents('php://stdin'); // Sensitive readfile('php://stdin'); // Sensitive $input = fopen('php://stdin', 'r'); // Sensitive fclose($input); // OK See |
||||||||||||
php:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue? Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact? Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofing: If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrity: When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. How to fix it in cURL: Code examples: The following code contains examples of disabled certificate validation.
The certificate validation gets disabled by setting the CURLOPT_SSL_VERIFYPEER option to false; note that this option is enabled by default. Noncompliant code example: $curl = curl_init(); curl_setopt($curl, CURLOPT_URL, 'https://example.com/'); curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); // Noncompliant curl_exec($curl); curl_close($curl); Compliant solution: $curl = curl_init(); curl_setopt($curl, CURLOPT_URL, 'https://example.com/'); curl_exec($curl); curl_close($curl); How does this work? Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificates: If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAs: In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. Resources: Standards
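For the self-signed or private-CA case mentioned above, a sketch of the trust-store approach is to keep verification enabled and point cURL at the CA bundle. The bundle path below is an assumption (a common Debian/Ubuntu location); adjust it to your system:

```php
// Keep peer verification enabled (the secure default) and supply the CA
// bundle rather than disabling validation. Path is an assumed example.
$curl = curl_init('https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, true); // verify the peer certificate
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 2);    // verify the hostname as well
curl_setopt($curl, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');
// curl_exec($curl);
// curl_close($curl);
```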
|
||||||||||||
php:S6339 |
Why is this an issue? Secret keys are used in combination with an algorithm to encrypt data. A typical use case is an authentication system. For such a system to be secure, the secret key should have a value which cannot be guessed and which is long enough to not be vulnerable to brute-force attacks. A "salt" is an extra piece of data which is included when hashing data such as a password. Its value should have the same properties as a secret key. This rule raises an issue when it detects that a secret key or a salt has a predictable value or that it’s not long enough. Noncompliant code example (WordPress): define('AUTH_KEY', 'hello'); // Noncompliant define('AUTH_SALT', 'hello'); // Noncompliant define('AUTH_KEY', 'put your unique phrase here'); // Noncompliant, this is the default value Compliant solution (WordPress): define('AUTH_KEY', 'D&ovlU#|CvJ##uNq}bel+^MFtT&.b9{UvR]g%ixsXhGlRJ7q!h}XWdEC[BOKXssj'); define('AUTH_SALT', 'FIsAsXJKL5ZlQo)iD-pt??eUbdc{_Cn<4!d~yqz))&B D?AwK%)+)F2aNwI|siOe'); Resources
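One way to obtain a value with the required properties, sketched here as an assumption rather than a WordPress-specific mechanism, is to generate it from a cryptographically secure source:

```php
// Sketch: derive a non-guessable key/salt value from a CSPRNG.
// 32 random bytes give 256 bits of entropy, rendered as 64 hex characters.
$key = bin2hex(random_bytes(32));
```

The resulting string can then be pasted into the `define()` calls shown in the compliant solution.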
|
||||||||||||
php:S2092 |
When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: In php.ini you can specify the flags for the session cookie which is security-sensitive: session.cookie_secure = 0; // Sensitive: this security-sensitive session cookie is created with the secure flag set to false (cookie_secure = 0) Same thing in PHP code: session_set_cookie_params($lifetime, $path, $domain, false); // Sensitive: this security-sensitive session cookie is created with the secure flag (the fourth argument) set to _false_ If you create a custom security-sensitive cookie in your PHP code: $value = "sensitive data"; setcookie($name, $value, $expire, $path, $domain, false); // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) set to _false_ By default the setcookie and setrawcookie functions leave the secure flag set to false: $value = "sensitive data"; setcookie($name, $value, $expire, $path, $domain); // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (false by default) setrawcookie($name, $value, $expire, $path, $domain); // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (false by default) Compliant Solution: session.cookie_secure = 1; // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the cookie_secure property set to 1 session_set_cookie_params($lifetime, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the fourth argument) set to true $value = "sensitive data"; setcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true setrawcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true See
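On PHP 7.3 and later, `setcookie` also accepts an options array, which makes the flags explicit and harder to forget. A sketch with assumed cookie name and lifetime:

```php
// PHP 7.3+ options-array form of setcookie: the secure (and httponly)
// flags are stated explicitly instead of being left to defaults.
$options = [
    'expires'  => time() + 3600, // assumed one-hour lifetime
    'path'     => '/',
    'secure'   => true,          // never sent over unencrypted HTTP
    'httponly' => true,          // not readable from JavaScript
    'samesite' => 'Strict',      // not sent on cross-site requests
];
if (!headers_sent()) {
    setcookie('session_token', 'sensitive data', $options);
}
```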
|
||||||||||||
php:S4834 |
This rule is deprecated, and will eventually be removed. The access control of an application must be properly implemented in order to restrict access to resources to authorized entities; otherwise this could lead to vulnerabilities. Granting correct permissions to users, applications, groups or roles and defining required permissions that allow access to a resource is sensitive and must therefore be done with care. For instance, it is obvious that only users with administrator privilege should be authorized to add/remove the administrator permission of another user. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesAt minimum, an access control system should:
Sensitive Code ExampleCakePHP use Cake\Auth\BaseAuthorize; use Cake\Controller\Controller; abstract class MyAuthorize extends BaseAuthorize { // Sensitive. Method extending Cake\Auth\BaseAuthorize. // ... } // Note that "isAuthorized" methods will only be detected in direct subclasses of Cake\Controller\Controller. abstract class MyController extends Controller { public function isAuthorized($user) { // Sensitive. Method called isAuthorized in a Cake\Controller\Controller. return false; } } See |
||||||||||||
php:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities: The same origin policy in browsers prevents, by default and for security reasons, a JavaScript front end from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same origin policy. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExamplePHP built-in header function: header("Access-Control-Allow-Origin: *"); // Sensitive Laravel: response()->header('Access-Control-Allow-Origin', "*"); // Sensitive Symfony: use Symfony\Component\HttpFoundation\Response; $response = new Response( 'Content', Response::HTTP_OK, ['Access-Control-Allow-Origin' => '*'] // Sensitive ); $response->headers->set('Access-Control-Allow-Origin', '*'); // Sensitive User-controlled origin: use Symfony\Component\HttpFoundation\Response; use Symfony\Component\HttpFoundation\Request; $origin = $request->headers->get('Origin'); $response->headers->set('Access-Control-Allow-Origin', $origin); // Sensitive Compliant SolutionPHP built-in header function: header("Access-Control-Allow-Origin: $trusteddomain"); Laravel: response()->header('Access-Control-Allow-Origin', $trusteddomain); Symfony: use Symfony\Component\HttpFoundation\Response; $response = new Response( 'Content', Response::HTTP_OK, ['Access-Control-Allow-Origin' => $trusteddomain] ); $response->headers->set('Access-Control-Allow-Origin', $trusteddomain); User-controlled origin validated with an allow-list: use Symfony\Component\HttpFoundation\Response; use Symfony\Component\HttpFoundation\Request; $origin = $request->headers->get('Origin'); if (in_array($origin, $trustedOrigins)) { $response->headers->set('Access-Control-Allow-Origin', $origin); } See
|
||||||||||||
php:S5808 |
When granting users access to resources of an application, such an authorization should be based on strong decisions. For instance, a user may be authorized to access a resource only if they are authenticated, or if they have the correct role and privileges. Why is this an issue? Access control is a critical aspect of web frameworks that ensures proper authorization and restricts access to sensitive resources or actions. To enable access control, web frameworks offer components that are responsible for evaluating user permissions and making access control decisions. They might examine the user’s credentials, such as roles or privileges, and compare them against predefined rules or policies to determine whether the user should be granted access to a specific resource or action. Conventionally, these checks should never grant access to every request received. If an endpoint or component is meant to be public, then it should be ignored by access control components. Conversely, if an endpoint should deny some users from accessing it, then access control has to be configured correctly for this endpoint. Granting unrestricted access to all users can lead to security vulnerabilities and potential misuse of critical functionalities. It is important to carefully assess access decisions based on factors such as user roles, resource sensitivity, and business requirements. Implementing a robust and granular access control mechanism is crucial for the security and integrity of the web application itself and its surrounding environment. What is the potential impact? Not verifying user access strictly can introduce significant security risks. Some of the most prominent risks are listed below. Depending on the use case, it is very likely that other risks are introduced on top of the ones listed.
Unauthorized access: As the access of users is not checked strictly, it becomes very easy for an attacker to gain access to restricted areas or functionalities, potentially compromising the confidentiality, integrity, and availability of sensitive resources. They may exploit this access to perform malicious actions, such as modifying or deleting data, impersonating legitimate users, or gaining administrative privileges, ultimately compromising the security of the system. Theft of sensitive data: Theft of sensitive data can result from incorrect access control if attackers manage to gain access to databases, file systems, or other storage mechanisms where sensitive data is stored. This can lead to the theft of personally identifiable information (PII), financial data, intellectual property, or other confidential information. The stolen data can be used for various malicious purposes, such as identity theft, financial fraud, or selling the data on the black market, causing significant harm to individuals and organizations affected by the breach.
How to fix it in Symfony: Code examples: Noncompliant code example: The vote method of this VoterInterface implementation grants access to every request: class NoncompliantVoter implements VoterInterface { public function vote(TokenInterface $token, $subject, array $attributes) { return self::ACCESS_GRANTED; // Noncompliant } } The voteOnAttribute method of this Voter implementation does the same: class NoncompliantVoter extends Voter { protected function supports(string $attribute, $subject) { return true; } protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token) { return true; // Noncompliant } } Compliant solution: The vote method of a VoterInterface implementation should grant, abstain from, or deny access depending on actual conditions: class CompliantVoter implements VoterInterface { public function vote(TokenInterface $token, $subject, array $attributes) { if (foo()) { return self::ACCESS_GRANTED; } else if (bar()) { return self::ACCESS_ABSTAIN; } return self::ACCESS_DENIED; } } The same goes for the voteOnAttribute method of a Voter implementation: class CompliantVoter extends Voter { protected function supports(string $attribute, $subject) { return true; } protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token) { if (foo()) { return true; } return false; } } Resources: Standards |
||||||||||||
Web:S5148 |
A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null, so the opened page can set window.opener.location to a malicious URL). For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Use rel="noopener" on links with target="_blank" so that the opened page cannot access the originating window. Note: In Chrome 88+, Firefox 79+ or Safari 12.1+, target="_blank" on anchors implies rel="noopener" by default. Sensitive Code Example: <a href="http://example.com/dangerous" target="_blank"> <!-- Sensitive --> <a href="{{variable}}" target="_blank"> <!-- Sensitive --> Compliant Solution: To prevent pages from abusing window.opener, force its value to null on the opened pages: <a href="http://petssocialnetwork.io" target="_blank" rel="noopener"> Exceptions: No issue will be raised when target="_blank" is used with a hardcoded relative URL, as it cannot point to an attacker-controlled page: <a href="internal.html" target="_blank" > See
||||||||||||
Web:S5247 |
To reduce the risk of cross-site scripting attacks, templating systems, such as Twig, Django, Smarty or Groovy's template engine, allow escaping of variables before templates are rendered ("auto-escaping"). Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; its effectiveness depends on the strategy applied and the context. For example, an "html auto-escaping" strategy (which only transforms html characters into html entities) will not be relevant when variables are used in a html attribute, because the ':' character is not escaped and a 'javascript:' URL can still be injected: <a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie) <a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack) Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one. Sensitive Code Example<!-- Django templates --> <p>{{ variable|safe }}</p><!-- Sensitive --> {% autoescape off %}<!-- Sensitive --> <!-- Jinja2 templates --> <p>{{ variable|safe }}</p><!-- Sensitive --> {% autoescape false %}<!-- Sensitive --> See
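The limitation described above can be demonstrated outside a template engine as well. This sketch uses PHP's `htmlspecialchars` as a stand-in for an HTML-entity auto-escaping strategy and a classic `javascript:` payload: none of the payload's characters are HTML-special, so entity escaping leaves the XSS vector intact.

```php
// HTML-entity escaping alone does not neutralize a "javascript:" URL
// placed in an href attribute: the payload contains no &, <, >, " or '.
$payload = 'javascript:alert(document.cookie)';
$escaped = htmlspecialchars($payload, ENT_QUOTES, 'UTF-8');
// $escaped is identical to $payload, so the injected link still executes JS.
```

This is why the escaping strategy must match the output context (URL, attribute, script block), not just the HTML body.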
|
||||||||||||
Web:S5725 |
Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application. On the client side, where front-end code is executed, malicious code could:
Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:
By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes
applied to it before it is downloaded. Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesTo check the integrity of a remote artifact, hash verification is the most reliable solution. It does ensure that the file has not been modified since the fingerprint was computed. In this case, the artifact’s hash must:
To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings. Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes. Sensitive Code ExampleThe following code sample uses neither integrity checks nor version pinning: <script src="https://cdn.example.com/latest/script.js" ></script> <!-- Sensitive --> Compliant Solution<script src="https://cdn.example.com/v5.3.6/script.js" integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC" ></script> See |
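The hash in the `integrity` attribute is a base64-encoded raw digest prefixed with the algorithm name. As an illustration (not Mozilla's tooling, and using a stand-in string for the fetched file), the value can be computed like this:

```php
// Sketch: compute a Subresource Integrity value in the
// "sha384-<base64 raw digest>" format used by the compliant example.
$contents = 'console.log("hello");'; // stand-in for the remote script's bytes
$integrity = 'sha384-' . base64_encode(hash('sha384', $contents, true));
```

In practice the digest is computed over the exact bytes of the pinned artifact version and pasted into the `integrity` attribute.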
||||||||||||
ruby:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact. Ask Yourself Whether: The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices: Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Example: ip = "192.168.12.42"; // Sensitive Compliant Solution: ip = IP_ADDRESS; // Compliant Exceptions: No issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
ruby:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
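A minimal sketch of the practice, assuming hypothetical environment variable names `DB_USER` and `DB_PASSWORD` (a secrets manager or configuration file outside the repository works equally well):

```javascript
// Sketch: read database credentials from the environment at startup
// instead of hard-coding them in the source. Variable names are examples.
function getDbCredentials() {
  const user = process.env.DB_USER;
  const password = process.env.DB_PASSWORD;
  if (!user || !password) {
    // Fail fast rather than silently falling back to a default secret.
    throw new Error('DB_USER and DB_PASSWORD must be set in the environment');
  }
  return { user, password };
}
```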
See
|
||||||||||||
javascript:S5732 |
Clickjacking attacks occur when an attacker tricks a user into clicking certain buttons or links of a legitimate website. This attack can take place via malicious HTML frames well hidden in an attacker’s website. For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of their profile by clicking on a button. This is a critical feature with high privacy concerns. Users are generally well informed on the social network about the consequences of this action. An attacker can trick users into performing this action without their consent, with the code below embedded in a malicious website: <html> <b>Click on the button below to win 5000$</b> <br> <iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe> </html> By playing with the size of the iframe, it’s sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page. Ask Yourself Whether
There is a risk if you answered yes to this question.
Recommended Secure Coding Practices
Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins of frames allowed to be loaded by the browser (this directive deprecates X-Frame-Options).
Sensitive Code Example
In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used with the frameAncestors directive set to 'none':
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { // other directives frameAncestors: ["'none'"] // Sensitive: frameAncestors is set to none } }) );
Compliant Solution
In an Express.js application, a standard way to implement the CSP frame-ancestors directive is the helmet-csp or helmet middleware:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { // other directives frameAncestors: ["'example.com'"] // Compliant } }) );
See
|
||||||||||||
javascript:S5734 |
MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet …) web browsers look for the Content-Type header defined in the HTTP response received from the server, but often this header is not set or is set with an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content-type, generally by inspecting the content of the resource (the first bytes). This "guess mechanism" is called MIME type sniffing. Attackers can take advantage of this feature when a website ("example.com" here) allows uploading arbitrary files. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as: <script>alert(document.cookie)</script> When the victim visits the website showing the uploaded image, the malicious script embedded in the image will be executed by web browsers performing MIME type sniffing. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.
Recommended Secure Coding Practices
Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and prevents browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch, the resource is not interpreted. For example, within a <script> object context, JavaScript MIME types are expected (like application/javascript) in the Content-Type header.
Sensitive Code Example
In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet({ noSniff: false, // Sensitive }) );
Compliant Solution
When using helmet, the noSniff middleware is the standard way to set the header:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.noSniff());
See
|
||||||||||||
javascript:S5730 |
Mixed content occurs when a resource is loaded over the HTTP protocol from a website accessed over the HTTPS protocol. Mixed content is not encrypted, is exposed to MITM attacks, and can break the entire level of protection that was intended by implementing encryption with the HTTPS protocol. The main threat of mixed content is not only the confidentiality of resources but the integrity of the whole website:
Ask Yourself Whether
There is a risk if you answered yes to this question.
Recommended Secure Coding Practices
Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and will block loading of mixed content.
Sensitive Code Example
In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { "default-src": ["'self'", 'example.com', 'code.jquery.com'] } // Sensitive: blockAllMixedContent directive is missing }) );
Compliant Solution
In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { "default-src": ["'self'", 'example.com', 'code.jquery.com'], blockAllMixedContent: [] // Compliant } }) ); See
|
||||||||||||
javascript:S5736 |
The HTTP header Referer contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks. Suppose an e-commerce website asks the user for their credit card number to purchase a product: <html> <body> <form action="/valid_order" method="GET"> Type your credit card number to purchase products: <input type=text id="cc" value="1111-2222-3333-4444"> <input type=submit> </form> </body> </html> When the above HTML form is submitted, an HTTP GET request is performed; the requested URL will be https://example.com/valid_order?cc=1111-2222-3333-4444 with the credit card number inside, which is obviously not secure for these reasons:
In addition to these threats, when further requests are performed from the "valid_order" page with a simple legitimate embedded script like this: <script src="https://webanalyticservices_example.com/track"> the Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue: GET /track HTTP/2.0 Host: webanalyticservices_example.com Referer: https://example.com/valid_order?cc=1111-2222-3333-4444 Ask Yourself Whether
There is a risk if you answered yes to any of those questions.
Recommended Secure Coding Practices
Confidential information should not be set inside URLs (GET requests) of the application, and a safe referrer policy (i.e. different from no-referrer-when-downgrade) should be used.
Sensitive Code Example
In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is used with the no-referrer-when-downgrade policy:
const express = require('express'); const helmet = require('helmet'); app.use( helmet.referrerPolicy({ policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used }) );
Compliant Solution
In an Express.js application, a secure solution is to use the helmet referrer policy middleware set
to no-referrer:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.referrerPolicy({ policy: 'no-referrer' // Compliant }) );
See
|
||||||||||||
javascript:S5739 |
When implementing the HTTPS protocol, websites usually continue to support the HTTP protocol in order to redirect users to HTTPS when they request the HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS. Web browsers that see the Strict-Transport-Security policy header for the first time record the information specified in the header:
With the Ask Yourself Whether
There is a risk if you answered yes to this question.
Recommended Secure Coding Practices
Implement the Strict-Transport-Security policy header; it is recommended to apply this policy to all subdomains (includeSubDomains).
Sensitive Code Example
In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without recommended values:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.hsts({ maxAge: 3153600, // Sensitive, recommended >= 15552000 includeSubDomains: false // Sensitive, recommended 'true' }));
Compliant Solution
In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.hsts({ maxAge: 31536000, includeSubDomains: true })); // Compliant
See
|
||||||||||||
javascript:S5743 |
This rule is deprecated and will eventually be removed. By default, web browsers perform DNS prefetching to reduce the latency of DNS resolutions required when a user clicks links on a website page. For instance, on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser: <a href="https://otherexample.com">go on our partner website</a> It can add significant latency during requests, especially if the page contains many links to cross-origin domains. DNS prefetch allows web browsers to perform DNS resolving in the background before the user clicks a link. This feature can cause privacy issues, because DNS resolving from the user’s computer is performed without their consent if they don’t intend to go to the linked website. On a complex private webpage, a combination of unique links/DNS resolutions can indicate, to an eavesdropper for instance, that the user is visiting the private page. Ask Yourself Whether
There is a risk if you answered yes to this question.
Recommended Secure Coding Practices
Implement the X-DNS-Prefetch-Control header with an off value, but note that this could significantly degrade website performance.
Sensitive Code Example
In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.dnsPrefetchControl({ allow: true // Sensitive: allowing DNS prefetching is security-sensitive }) );
Compliant Solution
In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement the X-DNS-Prefetch-Control header:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.dnsPrefetchControl({ allow: false // Compliant }) );
See
|
||||||||||||
javascript:S5852 |
Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input; in some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, which means that a small carefully crafted input (like 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact, in this case with a large carefully crafted input (thousands of chars). This rule determines the runtime complexity of a regular expression and informs you if it is not linear. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesTo avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression. In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.
In order to rewrite your regular expression without these patterns, consider the following strategies:
Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match. Especially when the regex is not anchored to the beginning of the string, for which it is quite hard to avoid quadratic runtimes. In those cases consider the following approaches:
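One such approach, sketched here as a non-authoritative example, is to bound the input length before running a regex whose worst-case runtime is uncertain; the 256-character limit and function name are arbitrary assumptions:

```javascript
// Sketch: reject oversized input before matching, so a crafted long
// string cannot trigger catastrophic or super-linear backtracking.
const MAX_LEN = 256; // arbitrary example limit

function isAllAs(input) {
  if (input.length > MAX_LEN) return false; // bound the work up front
  // The pattern itself is linear: no nested or overlapping quantifiers.
  return /^a+!$/.test(input);
}
```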
Sensitive Code Example
The regex evaluation will effectively never end:
/(a+)+$/.test( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaa!" ); // Sensitive
Compliant Solution
Possessive quantifiers do not keep backtracking positions and can thus be used, if possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences:
/((?=(a+))\2)+$/.test( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaa!" ); // Compliant
See
|
||||||||||||
javascript:S2598 |
Why is this an issue?
If the file upload feature is implemented without proper folder restriction, it will result in an implicit trust violation within the server, as trusted files will be implicitly stored alongside third-party files that should be considered untrusted. This can allow an attacker to disrupt the security of an internal server process or the running application.
What is the potential impact?
After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, bash scripts, malware, or malicious configuration files targeting potential processes. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.
Full application compromise
In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution. Depending on the attacker, code execution can be used with different intentions:
Server Resource Exhaustion
By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service. If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it. Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources. In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency. These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).
How to fix it in Formidable
Code examples
Noncompliant code example
const Formidable = require('formidable'); const form = new Formidable(); // Noncompliant form.uploadDir = "/tmp/"; form.keepExtensions = true;
Compliant solution
const Formidable = require('formidable'); const form = new Formidable(); form.uploadDir = "/uploads/"; form.keepExtensions = false;
How does this work?
Use pre-approved folders
Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:
This folder should not be located in Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names. Resources
|
||||||||||||
javascript:S5742 |
Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CA) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into incorrectly verifying a spoofed identity or forged certificate. CAs implement the Certificate Transparency framework to publicly log the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs and verify that their identity was not usurped. Ask Yourself Whether
There is a risk if you answered yes to this question.
Recommended Secure Coding Practices
Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs to verify whether the website appears in them; if it does not, the browser will block the request and display a warning to the user.
Sensitive Code Example
In an Express.js application, the code is sensitive if the expect-ct middleware is disabled:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet({ expectCt: false // Sensitive }) );
Compliant Solution
In an Express.js application, the expect-ct middleware is the standard way to implement
expect-ct. Usually, the deployment of this policy starts with the report-only mode (enforce: false), before switching to enforcement (enforce: true):
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.expectCt({ enforce: true, maxAge: 86400 })); // Compliant
See
|
||||||||||||
javascript:S4502 |
A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, and more generally anything that can change the state of the application. The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request; as web browsers automatically include cookies, the actions can be authenticated and sensitive. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example
Express.js CSURF middleware protection is not found on an unsafe HTTP method like the POST method:
let csrf = require('csurf'); let express = require('express'); let csrfProtection = csrf({ cookie: true }); let app = express(); // Sensitive: this operation doesn't look like protected by CSURF middleware (csrfProtection is not used) app.post('/money_transfer', parseForm, function (req, res) { res.send('Money transferred'); });
Protection provided by Express.js CSURF middleware is globally disabled on unsafe methods:
let csrf = require('csurf'); let express = require('express'); app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive as POST is unsafe method
Compliant Solution
Express.js CSURF middleware protection is used on unsafe methods:
let csrf = require('csurf'); let express = require('express'); let csrfProtection = csrf({ cookie: true }); let app = express(); app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant res.send('Money transferred') });
Protection provided by Express.js CSURF middleware is enabled on unsafe methods:
let csrf = require('csurf'); let express = require('express'); app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant
See |
||||||||||||
javascript:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.
Recommended Secure Coding Practices
Do not enable debugging features on production servers or applications distributed to end users.
Sensitive Code Example
The errorhandler Express.js middleware should not be used in production:
const express = require('express'); const errorhandler = require('errorhandler'); let app = express(); app.use(errorhandler()); // Sensitive
Compliant Solution
The errorhandler Express.js middleware used only in development mode:
const express = require('express'); const errorhandler = require('errorhandler'); let app = express(); if (process.env.NODE_ENV === 'development') { app.use(errorhandler()); }
See |
||||||||||||
javascript:S5604 |
Powerful features are browser features (geolocation, camera, microphone …) that can be accessed with JavaScript API and may require a permission granted by the user. These features can have a high impact on privacy and user security thus they should only be used if they are really necessary to implement the critical parts of an application. This rule highlights intrusive permissions when requested with the future standard (but currently experimental) web browser query API and specific APIs related to the permission. It is highly recommended to customize this rule with the permissions considered as intrusive in the context of the web application. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example
When using the geolocation API, Firefox for example retrieves personal information like nearby wireless access points and IP address and sends it to the default geolocation service provider, Google Location Services:
navigator.permissions.query({name:"geolocation"}).then(function(result) { }); // Sensitive: geolocation is a powerful feature with high privacy concerns navigator.geolocation.getCurrentPosition(function(position) { console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude); }); // Sensitive: geolocation is a powerful feature with high privacy concerns
Compliant Solution
If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible:
<html> <head> <title> Retailer website example </title> </head> <body> Type a city, street or zip code where you want to retrieve the closest retail locations of our products: <form method=post> <input type=text value="New York"> <!-- Compliant --> </form> </body> </html>
See
|
||||||||||||
javascript:S5725 |
Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application. On the client side, where front-end code is executed, malicious code could:
Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:
By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes
applied to it before it is downloaded. Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself. Ask Yourself Whether
There is a risk if you answered yes to any of these questions.
Recommended Secure Coding Practices
To check the integrity of a remote artifact, hash verification is the most reliable solution. It does ensure that the file has not been modified since the fingerprint was computed. In this case, the artifact’s hash must:
To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings. Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.
Sensitive Code Example
The following code sample uses neither integrity checks nor version pinning:
let script = document.createElement("script"); script.src = "https://cdn.example.com/latest/script.js"; // Sensitive script.crossOrigin = "anonymous"; document.head.appendChild(script);
Compliant Solution
let script = document.createElement("script"); script.src = "https://cdn.example.com/v5.3.6/script.js"; script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"; script.crossOrigin = "anonymous"; document.head.appendChild(script);
See |
||||||||||||
javascript:S5728 |
Content security policy (CSP) (fetch directives) is a W3C standard which is used by a server to specify, via an HTTP header, the origins from which the browser is allowed to load resources. It can help to mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website doesn’t define a CSP header, the browser applies the same-origin policy by default. Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com In the above example, all resources are allowed from the website where this header is set, and script resources fetched from example.com are also authorized: <img src="selfhostedimage.png"> <!-- will be loaded because default-src 'self'; directive is applied --> <img src="http://www.example.com/image.png"> <!-- will NOT be loaded because default-src 'self'; directive is applied --> <script src="http://www.example.com/library.js"></script> <!-- will be loaded because script-src 'self' http://www.example.com directive is applied --> <script src="selfhostedscript.js"></script> <!-- will be loaded because script-src 'self' http://www.example.com directive is applied --> <script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because script-src 'self' http://www.example.com directive is applied --> Ask Yourself Whether
There is a risk if you answered yes to this question.
Recommended Secure Coding Practices
Implement content security policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application; indeed, CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks.
Sensitive Code Example
In an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet({ contentSecurityPolicy: false, // sensitive }) );
Compliant Solution
In an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.contentSecurityPolicy()); // Compliant
See
|
||||||||||||
javascript:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.
Why is this an issue?
Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution. For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.
What is the potential impact?
The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.
Theft of sensitive data
The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.
Additional attack surface
By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.
How to fix it in Node.js
Code examples
Noncompliant code example
Example with a symmetric cipher, AES:
const crypto = require('crypto'); crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant
Compliant solution
Example with a symmetric cipher, AES:
const crypto = require('crypto'); crypto.createCipheriv("AES-256-GCM", key, iv);
How does this work?
As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.
Appropriate choices are currently the following.
For AES: use authenticated encryption modes
The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
For RSA: use the OAEP scheme
The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
Resources
Articles & blog posts
Standards |
||||||||||||
javascript:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.
Why is this an issue?
Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.
What is the potential impact?
The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.
Theft of sensitive data
The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.
Additional attack surface
By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.
How to fix it in Node.js
Code examples
The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.
Noncompliant code example
const crypto = require('crypto'); crypto.createCipheriv("DES", key, iv); // Noncompliant
Compliant solution
const crypto = require('crypto'); crypto.createCipheriv("AES-256-GCM", key, iv);
How does this work?
Use a secure algorithm
It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.
Resources
Standards |
||||||||||||
javascript:S5659 |
This vulnerability allows forging of JSON Web Tokens to impersonate other users. Why is this an issue? JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature. What is the potential impact? When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities. Impersonation of users: JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data. Unauthorized data access: When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access. How to fix it in jsonwebtoken: Code examples: The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.
Noncompliant code example: const jwt = require('jsonwebtoken'); jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant const jwt = require('jsonwebtoken'); jwt.verify(token, key, { expiresIn: 360000, algorithms: ['none'] // Noncompliant }, callbackcheck); Compliant solution: const jwt = require('jsonwebtoken'); jwt.sign(payload, key, { algorithm: 'HS256' }); const jwt = require('jsonwebtoken'); jwt.verify(token, key, { expiresIn: 360000, algorithms: ['HS256'] }, callbackcheck); How does this work? Always sign your tokens: The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created. Choose a strong cipher algorithm: It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens. Verify the signature of your tokens: Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose. Every time your application receives a JWT, it needs to decode the token to extract the information contained within.
It is during this decoding process that the signature of the JWT should also be checked. To resolve the issue, follow these instructions:
By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process. Going the extra mile: Securely store your secret keys: Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services. Rotate your secret keys: Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions. Resources: Standards |
||||||||||||
javascript:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: const val = Math.random(); // Sensitive // Check if val is used in a security context. Compliant Solution: // === Client side === const crypto = window.crypto || window.msCrypto; var array = new Uint32Array(1); crypto.getRandomValues(array); // Compliant for security-sensitive use cases // === Server side === const crypto = require('crypto'); const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases See
|
||||||||||||
javascript:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue? Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact? After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surface: By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacy: When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issues: In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Node.js: Code examples: Noncompliant code example: Node.js offers multiple ways to set weak TLS protocols. The options below apply to the https and tls modules and are also used by other third-party libraries.
The first is: const https = require('node:https'); const tls = require('node:tls'); let options = { secureProtocol: 'TLSv1_method' // Noncompliant }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); The second is the combination of: const https = require('node:https'); const tls = require('node:tls'); let options = { minVersion: 'TLSv1.1', // Noncompliant maxVersion: 'TLSv1.2' }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); And: const https = require('node:https'); const tls = require('node:tls'); const { constants } = require('node:crypto'); let options = { secureOptions: constants.SSL_OP_NO_SSLv2 | constants.SSL_OP_NO_SSLv3 | constants.SSL_OP_NO_TLSv1 }; // Noncompliant let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); Compliant solution: const https = require('node:https'); const tls = require('node:tls'); let options = { secureProtocol: 'TLSv1_2_method' }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); const https = require('node:https'); const tls = require('node:tls'); let options = { minVersion: 'TLSv1.2', maxVersion: 'TLSv1.2' }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions: const https = require('node:https'); const tls = require('node:tls'); const { constants } = require('node:crypto'); let options = { secureOptions: constants.SSL_OP_NO_SSLv2 | constants.SSL_OP_NO_SSLv3 | constants.SSL_OP_NO_TLSv1 | constants.SSL_OP_NO_TLSv1_1 }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); How does this work? As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered
strong by the cryptographic community. The best choices at the moment are the following. Use TLS v1.2 or TLS v1.3: Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. Resources: Articles & blog posts
Standards
|
||||||||||||
javascript:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext. Why is this an issue? Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means. What is the potential impact? After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surface: By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacy: When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issues: In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Node.js: Code examples: The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm. Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
Noncompliant code example: Here is an example of a private key generation with RSA: const crypto = require('crypto'); var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 1024, // Noncompliant publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }); Here is an example of a key generation with the Digital Signature Algorithm (DSA): const crypto = require('crypto'); var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', { modulusLength: 1024, // Noncompliant publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }); Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name: const crypto = require('crypto'); function callback(err, pub, priv) {} crypto.generateKeyPair('ec', { namedCurve: 'secp112r2', // Noncompliant publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); Compliant solution: Here is an example of a private key generation with RSA: const crypto = require('crypto'); var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048, publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }); Here is an example of a key generation with the Digital Signature Algorithm (DSA): const crypto = require('crypto'); var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', { modulusLength: 2048, publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }); Here is an example of an Elliptic Curve (EC) initialization.
It implicitly generates a private key whose size is indicated in the elliptic curve name: const crypto = require('crypto'); function callback(err, pub, priv) {} crypto.generateKeyPair('ec', { namedCurve: 'secp224k1', publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); How does this work? As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community. The appropriate choices are the following. RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm): The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 should be preferred when possible. AES (Advanced Encryption Standard): AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying
all possible keys. Currently, a minimum key size of 128 bits is recommended for AES. Elliptic Curve Cryptography (ECC): Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve
algorithms is mentioned directly in their names. Currently, a minimum key size of 224 bits is recommended for EC-based algorithms. Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:
Going the extra mile: Pre-Quantum Cryptography: Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety. Resources
Articles & blog posts
Standards
|
||||||||||||
javascript:S4787 |
This rule is deprecated; use S4426, S5542, S5547 instead. Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities: Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only means to defeat or weaken an encryption scheme. This rule flags function calls that initiate encryption/decryption. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: // === Client side === crypto.subtle.encrypt(algo, key, plainData); // Sensitive crypto.subtle.decrypt(algo, key, encData); // Sensitive // === Server side === const crypto = require("crypto"); const cipher = crypto.createCipher(algo, key); // Sensitive const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive const decipher = crypto.createDecipher(algo, key); // Sensitive const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive See
|
||||||||||||
javascript:S5876 |
An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled. Why is this an issue? Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:
What is the potential impact? Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following: Impersonation: Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf. Data Breach: If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes. Privilege Escalation: In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems. How to fix it in Passport: Code examples: Upon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve
this by using the regenerate method of the session object. Noncompliant code example: app.post('/login', passport.authenticate('local', { failureRedirect: '/login' }), function(req, res) { // Noncompliant - no session.regenerate after login res.redirect('/'); }); Compliant solution: app.post('/login', passport.authenticate('local', { failureRedirect: '/login' }), function(req, res) { let prevSession = req.session; req.session.regenerate((err) => { Object.assign(req.session, prevSession); res.redirect('/'); }); }); How does this work? The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process. Here’s how session fixation protection typically works:
By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process. Resources: Documentation
Articles & blog posts. Standards |
||||||||||||
javascript:S3330 |
When a cookie is configured with the Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: cookie-session module: let session = cookieSession({ httpOnly: false, // Sensitive }); express-session module: const express = require('express'); const session = require('express-session'); let app = express(); app.use(session({ cookie: { httpOnly: false // Sensitive } })); cookies module: let cookies = new Cookies(req, res, { keys: keys }); cookies.set('LastVisit', new Date().toISOString(), { httpOnly: false // Sensitive }); csurf module: const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const express = require('express'); let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive Compliant Solution: cookie-session module: let session = cookieSession({ httpOnly: true, // Compliant }); express-session module: const express = require('express'); const session = require('express-session'); let app = express(); app.use(session({ cookie: { httpOnly: true // Compliant } })); cookies module: let cookies = new Cookies(req, res, { keys: keys }); cookies.set('LastVisit', new Date().toISOString(), { httpOnly: true // Compliant }); csurf module: const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const express = require('express'); let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant See
|
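All of the modules above ultimately emit a `Set-Cookie` header. This dependency-free sketch (the `sessionCookie` helper is illustrative) shows what a hardened header looks like on the wire, with `HttpOnly` keeping the value out of reach of `document.cookie` and script-injection attacks:

```javascript
// Build a hardened Set-Cookie header value. HttpOnly hides the cookie from
// JavaScript; Secure restricts it to HTTPS; SameSite=Strict limits CSRF.
function sessionCookie(name, value) {
  return `${name}=${encodeURIComponent(value)}; HttpOnly; Secure; SameSite=Strict; Path=/`;
}

console.log(sessionCookie('sid', 'abc123'));
```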
||||||||||||
javascript:S4784 |
This rule is deprecated; use S5852 instead. Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities: Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as
Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users. This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following
characters: Example: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. If possible, use a library that is not vulnerable to ReDoS attacks, such as Google RE2. Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection. Sensitive Code Example: const regex = /(a+)+b/; // Sensitive const regex2 = new RegExp("(a+)+b"); // Sensitive str.search("(a+)+b"); // Sensitive str.match("(a+)+b"); // Sensitive str.split("(a+)+b"); // Sensitive Note: String.matchAll does not raise any issue as it is not supported by NodeJS. Exceptions: Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: It is a good idea to test your regular expression if it has the same pattern on both sides of a " See
|
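The flagged pattern `(a+)+b` is dangerous because the nested quantifiers give the engine exponentially many ways to partition a run of `a`s when the final `b` is missing, which is the catastrophic-backtracking behavior ReDoS exploits. The sketch below shows an unambiguous rewrite that accepts exactly the same strings but matches in linear time (the vulnerable pattern is only tested on a short matching input, which is safe):

```javascript
// Nested quantifier: each 'a' can belong to many (a+) groups, so a
// non-matching input like 'aaaa...a!' triggers exponential backtracking.
const vulnerable = /^(a+)+b$/; // Sensitive

// Equivalent language, no nesting: one or more 'a' followed by 'b'.
const safe = /^a+b$/;

console.log(safe.test('aaab')); // true
```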
||||||||||||
javascript:S5757 |
Log management is an important topic, especially for the security of a web application: it ensures that user activity, including that of potential attackers, is recorded and available for an analyst to understand what happened on the web application in case of malicious activity. Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS and others. However, to protect users’ privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text. Ask Yourself Whether: In a production environment:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs. Sensitive Code Example: With the Signale log management framework, the code is sensitive when an empty list of secrets is defined: const { Signale } = require('signale'); const CREDIT_CARD_NUMBERS = fetchFromWebForm() // here we suppose the credit card numbers are retrieved somewhere and CREDIT_CARD_NUMBERS looks like ["1234-5678-0000-9999", "1234-5678-0000-8888"]; for instance const options = { secrets: [] // empty list of secrets }; const logger = new Signale(options); // Sensitive CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) { logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER); }); Compliant Solution: With the Signale log management framework it is possible to define a list of secrets that will be hidden in logs: const { Signale } = require('signale'); const CREDIT_CARD_NUMBERS = fetchFromWebForm() // here we suppose the credit card numbers are retrieved somewhere and CREDIT_CARD_NUMBERS looks like ["1234-5678-0000-9999", "1234-5678-0000-8888"]; for instance const options = { secrets: ["([0-9]{4}-?)+"] }; const logger = new Signale(options); // Compliant CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) { logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER); }); See |
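The same idea works with any logger, not just Signale: mask secret-shaped substrings before the message reaches the log sink. This framework-agnostic sketch (the `maskSecrets` name is illustrative) reuses the card-number regex from the Signale `secrets` option above:

```javascript
// Replace anything shaped like a card number before logging it.
// The pattern mirrors the Signale 'secrets' entry: ([0-9]{4}-?)+
function maskSecrets(message) {
  return message.replace(/([0-9]{4}-?)+/g, '[masked]');
}

console.log(maskSecrets('card 1234-5678-0000-9999 charged'));
// → 'card [masked] charged'
```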
||||||||||||
javascript:S2255 |
This rule is deprecated, and will eventually be removed. Using cookies is security-sensitive. It has led in the past to the following vulnerabilities: Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed. This rule flags code that writes cookies. Ask Yourself Whether
You are at risk if you answered yes to this question. Recommended Secure Coding Practices: Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a very few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session. Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie: the encoding can be reverted and the original information will be exposed. Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies. As a side note, any information read from a cookie should be sanitized. Sensitive Code Example: // === Built-in NodeJS modules === const http = require('http'); const https = require('https'); http.createServer(function(req, res) { res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive }); https.createServer(function(req, res) { res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive }); // === ExpressJS === const express = require('express'); const app = express(); app.use(function(req, res, next) { res.cookie('name', 'John'); // Sensitive }); // === In browser === // Set cookie document.cookie = "name=John"; // Sensitive See
|
||||||||||||
javascript:S5759 |
Users often connect to web servers through HTTP proxies. A proxy can be configured to forward the client IP address to the target server via dedicated HTTP headers. An IP address is personal information that can identify a single user and thus impact their privacy. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: The user IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management. Sensitive Code Example: var httpProxy = require('http-proxy'); httpProxy.createProxyServer({target:'http://localhost:9000', xfwd:true}) // Noncompliant .listen(8000); var express = require('express'); const { createProxyMiddleware } = require('http-proxy-middleware'); const app = express(); app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant app.listen(3000); Compliant Solution: var httpProxy = require('http-proxy'); // By default the xfwd option is false httpProxy.createProxyServer({target:'http://localhost:9000'}) // Compliant .listen(8000); var express = require('express'); const { createProxyMiddleware } = require('http-proxy-middleware'); const app = express(); // By default the xfwd option is false app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true})); // Compliant app.listen(3000); See
|
||||||||||||
javascript:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure, because collisions (different inputs producing the same hash) can be found with little computational effort. Ask Yourself WhetherThe hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 or SHA-3, are recommended; for hashing passwords, prefer a slow, salted algorithm such as bcrypt, scrypt, argon2 or PBKDF2.

Sensitive Code Example

```javascript
const crypto = require("crypto");
const hash = crypto.createHash('sha1'); // Sensitive
```

Compliant Solution

```javascript
const crypto = require("crypto");
const hash = crypto.createHash('sha512'); // Compliant
```

See
|
||||||||||||
javascript:S5527 |
This vulnerability allows attackers to impersonate a trusted host. Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security. When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. To do so, an attacker would obtain a certificate that is valid for a host they control and present it while impersonating another host; with hostname validation disabled, the client never notices the mismatch.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.
The hostname validation gets disabled by overriding the checkServerIdentity function with an empty implementation.

Noncompliant code example

```javascript
const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
```

```javascript
const tls = require('node:tls');

let options = {
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});
```

Compliant solution

```javascript
const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
```

```javascript
const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});
```

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues. Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself. In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:
ResourcesStandards
|
||||||||||||
javascript:S2755 |
This vulnerability allows the usage of external entities in XML. Why is this an issue?External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack. What is the potential impact?Exposing sensitive dataOne significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information. Exhausting system resourcesAnother consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience. Forging requestsXXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure. 
How to fix it in libxmljsCode examplesThe following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed. Noncompliant code examplevar libxmljs = require('libxmljs'); var fs = require('fs'); var xml = fs.readFileSync('xxe.xml', 'utf8'); libxmljs.parseXmlString(xml, { noblanks: true, noent: true, // Noncompliant nocdata: true }); Compliant solution
var libxmljs = require('libxmljs'); var fs = require('fs'); var xml = fs.readFileSync('xxe.xml', 'utf8'); libxmljs.parseXmlString(xml); How does this work?Disable external entitiesThe most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework. If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved
during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are
processed. ResourcesStandards |
||||||||||||
javascript:S4817 |
This rule is deprecated, and will eventually be removed. Executing XPath expressions is security-sensitive. It has led in the past to the following vulnerabilities: User-provided data such as URL parameters should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the initial meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document. Ask Yourself Whether
You are at risk if you answered yes to this question. Recommended Secure Coding Practices

Sanitize any user input before using it in an XPath expression.

Sensitive Code Example

```javascript
// === Server side ===
var xpath = require('xpath');
var xmldom = require('xmldom');

var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select(userinput, doc); // Sensitive
var node = xpath.select1(userinput, doc); // Sensitive

// === Client side ===
// Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

// Internet Explorer uses its own methods to select nodes:
var nodes = xmlDoc.selectNodes(userinput); // Sensitive
var node = xmlDoc.SelectSingleNode(userinput); // Sensitive
```

See |
||||||||||||
javascript:S4818 |
This rule is deprecated, and will eventually be removed. Using sockets is security-sensitive. It has led in the past to the following vulnerabilities: Sockets are vulnerable in multiple ways:
This rule flags code that creates sockets. It matches only the direct use of sockets, not their use through frameworks or high-level APIs such as HTTP connections. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleconst net = require('net'); var socket = new net.Socket(); // Sensitive socket.connect(80, 'google.com'); // net.createConnection creates a new net.Socket, initiates connection with socket.connect(), then returns the net.Socket that starts the connection net.createConnection({ port: port }, () => {}); // Sensitive // net.connect is an alias to net.createConnection net.connect({ port: port }, () => {}); // Sensitive See |
||||||||||||
javascript:S1523 |
Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities: Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: XSS attack) and have a huge impact on an application’s security. This rule raises issues on calls to eval() and the Function constructor. The rule also flags string literals starting with javascript:. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser). Do not try to create a blacklist of dangerous code; it is impossible to cover all attacks that way. Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.

Sensitive Code Example

```javascript
let value = eval('obj.' + propName); // Sensitive
let func = Function('obj' + propName); // Sensitive
location.href = 'javascript:void(0)'; // Sensitive
```

Exceptions

This rule will not raise an issue when the argument of the eval or Function call is a literal string, as such usage is reasonably safe.

See |
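Where a pattern like `eval('obj.' + propName)` only reads a dynamically named property, plain bracket access behind an allowlist removes the dynamic code execution entirely. A sketch — the allowed property names are illustrative:

```javascript
// Illustrative allowlist: only these property names may be looked up.
const ALLOWED_PROPS = new Set(['name', 'email']);

// Equivalent of eval('obj.' + propName) without executing any dynamic code.
// The allowlist also blocks prototype-related names like 'constructor'.
function getProp(obj, propName) {
  if (!ALLOWED_PROPS.has(propName)) {
    throw new Error(`property not allowed: ${propName}`);
  }
  return obj[propName];
}
```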
||||||||||||
javascript:S1525 |
This rule is deprecated; use S4507 instead. Why is this an issue?The debugger statement can be placed anywhere in procedures to suspend execution. Using the debugger statement is similar to setting a breakpoint in the code. Such statements must be removed from the source code before release to prevent any unexpected behavior or added vulnerability to attacks in production.

Noncompliant code example

```javascript
for (i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
  // Wait for user to resume.
  debugger;
}
```

Compliant solution

```javascript
for (i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
}
```

Resources |
||||||||||||
javascript:S2612 |
In Unix file system permissions, the "others" category covers every user who is neither the owner of the resource nor a member of the group assigned to it. Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesThe most restrictive possible permissions should be assigned to files and directories. Sensitive Code ExampleNode.js const fs = require('fs'); fs.chmodSync("/tmp/fs", 0o777); // Sensitive const fs = require('fs'); const fsPromises = fs.promises; fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive const fs = require('fs'); const fsPromises = fs.promises async function fileHandler() { let filehandle; try { filehandle = fsPromises.open('/tmp/fsPromises', 'r'); filehandle.chmod(0o777); // Sensitive } finally { if (filehandle !== undefined) filehandle.close(); } } Node.js const process = require('process'); process.umask(0o000); // Sensitive Compliant SolutionNode.js const fs = require('fs'); fs.chmodSync("/tmp/fs", 0o770); // Compliant const fs = require('fs'); const fsPromises = fs.promises; fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant const fs = require('fs'); const fsPromises = fs.promises async function fileHandler() { let filehandle; try { filehandle = fsPromises.open('/tmp/fsPromises', 'r'); filehandle.chmod(0o770); // Compliant } finally { if (filehandle !== undefined) filehandle.close(); } } Node.js const process = require('process'); process.umask(0o007); // Compliant See
|
||||||||||||
javascript:S4721 |
Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process, because shell meta-characters can then be used (when parameters are user-controlled, for instance) to inject OS commands. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

```javascript
const cp = require('child_process');

// A shell will be spawned in the following cases:
cp.exec(cmd); // Sensitive
cp.execSync(cmd); // Sensitive
cp.spawn(cmd, { shell: true }); // Sensitive
cp.spawnSync(cmd, { shell: true }); // Sensitive
cp.execFile(cmd, { shell: true }); // Sensitive
cp.execFileSync(cmd, { shell: true }); // Sensitive
```

Compliant Solution

```javascript
const cp = require('child_process');
cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant
```

See |
||||||||||||
javascript:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it affects application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, solving the issue takes more time, which increases an attack’s impact. Ask Yourself WhetherThe disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it makes it possible to change the destination quickly without having to rebuild the software.

Sensitive Code Example

```javascript
ip = "192.168.12.42"; // Sensitive

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});
```

Compliant Solution

```javascript
ip = process.env.IP_ADDRESS; // Compliant

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});
```

Exceptions

No issue is reported for the following cases because they are not considered sensitive:
See |
||||||||||||
javascript:S4823 |
This rule is deprecated, and will eventually be removed. Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities: Command line arguments can be dangerous just like any other user input. They should never be used without first being validated and sanitized. Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure. This rule raises an issue whenever command line arguments are read (for example through process.argv). Ask Yourself Whether
If you answered yes to any of these questions, you are at risk. Recommended Secure Coding Practices

Sanitize all command line arguments before using them. Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line: it is common to write it to the process’s standard input, or to pass the path to a file containing the information.

Sensitive Code Example

```javascript
// The process object is a global that provides information about, and control over, the current Node.js process
var param = process.argv[2]; // Sensitive: check how the argument is used
console.log('Param: ' + param);
```

See |
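As a concrete example of validating an argument before use, the sketch below parses a port number and rejects anything that is not a plain integer in range. The function name and messages are illustrative:

```javascript
// Validate a command line argument that is supposed to be a TCP port.
// Rejecting anything that is not a bare integer also rejects injection
// attempts such as "8080; rm -rf /".
function parsePort(rawArg) {
  if (!/^\d+$/.test(rawArg)) {
    throw new RangeError(`not a number: ${rawArg}`);
  }
  const port = Number(rawArg);
  if (port < 1 || port > 65535) {
    throw new RangeError(`port out of range: ${rawArg}`);
  }
  return port;
}

// Typical use: const port = parsePort(process.argv[2]);
```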
||||||||||||
javascript:S4829 |
This rule is deprecated, and will eventually be removed. Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities: It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated. This rule flags code that reads from the standard input. Ask Yourself Whether
You are at risk if you answered yes to this question. Recommended Secure Coding PracticesSanitize all data read from the standard input before using it. Sensitive Code Example// The process object is a global that provides information about, and control over, the current Node.js process // All uses of process.stdin are security-sensitive and should be reviewed process.stdin.on('readable', () => { const chunk = process.stdin.read(); // Sensitive if (chunk !== null) { dosomething(chunk); } }); const readline = require('readline'); readline.createInterface({ input: process.stdin // Sensitive }).on('line', (input) => { dosomething(input); }); See |
||||||||||||
javascript:S1442 |
This rule is deprecated; use S4507 instead. Why is this an issue?Dialog calls such as alert(...), confirm(...) and prompt(...) can be useful for debugging during development, but in production they can expose internal information to end users and block the UI, so such calls should be removed before release.
Noncompliant code example

```javascript
if (unexpectedCondition) {
  alert("Unexpected Condition");
}
```

Resources |
||||||||||||
javascript:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrityWhen TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. How to fix it in Node.jsCode examplesThe following code contains examples of disabled certificate validation. 
The certificate validation gets disabled by setting rejectUnauthorized to false.

Noncompliant code example

```javascript
const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  rejectUnauthorized: false, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
```

```javascript
const tls = require('node:tls');

let options = {
  rejectUnauthorized: false, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});
```

Compliant solution

```javascript
const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
```

```javascript
const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});
```

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots.
Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. ResourcesStandards
|
||||||||||||
javascript:S6265 |
Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users. The following canned ACLs are security-sensitive:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to only grant users the necessary permissions for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default).

Sensitive Code Example

All users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control:

```javascript
const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
  accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
  accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});
```

Compliant Solution

With the PRIVATE access control, only the bucket owner is granted access:

```javascript
const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
  accessControl: s3.BucketAccessControl.PRIVATE
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
  accessControl: s3.BucketAccessControl.PRIVATE
});
```

See
|
||||||||||||
javascript:S6268 |
Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM. Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleimport { Component, OnInit } from '@angular/core'; import { DomSanitizer, SafeHtml } from "@angular/platform-browser"; import { ActivatedRoute } from '@angular/router'; @Component({ template: '<div id="hello" [innerHTML]="hello"></div>' }) export class HelloComponent implements OnInit { hello: SafeHtml; constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { } ngOnInit(): void { let name = this.route.snapshot.queryParams.name; let html = "<h1>Hello " + name + "</h1>"; this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive } } Compliant Solutionimport { Component, OnInit } from '@angular/core'; import { DomSanitizer } from "@angular/platform-browser"; import { ActivatedRoute } from '@angular/router'; @Component({ template: '<div id="hello"><h1>Hello {{name}}</h1></div>', }) export class HelloComponent implements OnInit { name: string; constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { } ngOnInit(): void { this.name = this.route.snapshot.queryParams.name; } } See |
||||||||||||
javascript:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g., a long string of repeated bytes). Ask Yourself WhetherArchives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleFor tar module: const tar = require('tar'); tar.x({ // Sensitive file: 'foo.tar.gz' }); For adm-zip module: const AdmZip = require('adm-zip'); let zip = new AdmZip("./foo.zip"); zip.extractAllTo("."); // Sensitive For jszip module: const fs = require("fs"); const JSZip = require("jszip"); fs.readFile("foo.zip", function(err, data) { if (err) throw err; JSZip.loadAsync(data).then(function (zip) { // Sensitive zip.forEach(function (relativePath, zipEntry) { if (!zip.file(zipEntry.name)) { fs.mkdirSync(zipEntry.name); } else { zip.file(zipEntry.name).async('nodebuffer').then(function (content) { fs.writeFileSync(zipEntry.name, content); }); } }); }); }); For yauzl module const yauzl = require('yauzl'); yauzl.open('foo.zip', function (err, zipfile) { if (err) throw err; zipfile.on("entry", function(entry) { zipfile.openReadStream(entry, function(err, readStream) { if (err) throw err; // TODO: extract }); }); }); For extract-zip module: const extract = require('extract-zip') async function main() { let target = __dirname + '/test'; await extract('test.zip', { dir: target }); // Sensitive } main(); Compliant SolutionFor tar module: const tar = require('tar'); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB let fileCount = 0; let totalSize = 0; tar.x({ file: 'foo.tar.gz', filter: (path, entry) => { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } totalSize += entry.size; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } return true; } }); For adm-zip module: const AdmZip = require('adm-zip'); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB const THRESHOLD_RATIO = 10; let fileCount = 0; let totalSize = 0; let zip = new AdmZip("./foo.zip"); let zipEntries = zip.getEntries(); zipEntries.forEach(function(zipEntry) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. 
number of files'; } let entrySize = zipEntry.getData().length; totalSize += entrySize; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } let compressionRatio = entrySize / zipEntry.header.compressedSize; if (compressionRatio > THRESHOLD_RATIO) { throw 'Reached max. compression ratio'; } if (!zipEntry.isDirectory) { zip.extractEntryTo(zipEntry.entryName, "."); } }); For jszip module: const fs = require("fs"); const pathmodule = require("path"); const JSZip = require("jszip"); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB let fileCount = 0; let totalSize = 0; let targetDirectory = __dirname + '/archive_tmp'; fs.readFile("foo.zip", function(err, data) { if (err) throw err; JSZip.loadAsync(data).then(function (zip) { zip.forEach(function (relativePath, zipEntry) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } // Prevent ZipSlip path traversal (S6096) const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name); if (!resolvedPath.startsWith(targetDirectory)) { throw 'Path traversal detected'; } if (!zip.file(zipEntry.name)) { fs.mkdirSync(resolvedPath); } else { zip.file(zipEntry.name).async('nodebuffer').then(function (content) { totalSize += content.length; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } fs.writeFileSync(resolvedPath, content); }); } }); }); }); Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure. For yauzl module const yauzl = require('yauzl'); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB const THRESHOLD_RATIO = 10; yauzl.open('foo.zip', function (err, zipfile) { if (err) throw err; let fileCount = 0; let totalSize = 0; zipfile.on("entry", function(entry) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } // The uncompressedSize comes from the zip headers, so it might not be trustworthy. 
// Alternatively, calculate the size from the readStream. let entrySize = entry.uncompressedSize; totalSize += entrySize; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } if (entry.compressedSize > 0) { let compressionRatio = entrySize / entry.compressedSize; if (compressionRatio > THRESHOLD_RATIO) { throw 'Reached max. compression ratio'; } } zipfile.openReadStream(entry, function(err, readStream) { if (err) throw err; // TODO: extract }); }); }); Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure. For extract-zip module: const extract = require('extract-zip') const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB const THRESHOLD_RATIO = 10; async function main() { let fileCount = 0; let totalSize = 0; let target = __dirname + '/foo'; await extract('foo.zip', { dir: target, onEntry: function(entry, zipfile) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } // The uncompressedSize comes from the zip headers, so it might not be trustworthy. // Alternatively, calculate the size from the readStream. let entrySize = entry.uncompressedSize; totalSize += entrySize; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } if (entry.compressedSize > 0) { let compressionRatio = entrySize / entry.compressedSize; if (compressionRatio > THRESHOLD_RATIO) { throw 'Reached max. compression ratio'; } } } }); } main(); See
|
||||||||||||
javascript:S6245 |
This rule is deprecated, and will eventually be removed. Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks and other attacks on the AWS infrastructure itself. There are three SSE options:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys. Sensitive Code ExampleServer-side encryption is not used: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'default' }); // Sensitive Bucket encryption is disabled by default. Compliant SolutionServer-side encryption with Amazon S3-Managed Keys is used: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { encryption: s3.BucketEncryption.S3_MANAGED }); // Alternatively with a KMS key managed by the user. new s3.Bucket(this, 'id', { encryption: s3.BucketEncryption.KMS, encryptionKey: access_key }); See
|
||||||||||||
javascript:S6249 |
By default, S3 buckets can be accessed through the HTTP and HTTPS protocols. As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. This means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to enforce HTTPS only access by setting Sensitive Code ExampleS3 bucket objects access through TLS is not enforced by default: const s3 = require('aws-cdk-lib/aws-s3'); const bucket = new s3.Bucket(this, 'example'); // Sensitive Compliant Solutionconst s3 = require('aws-cdk-lib/aws-s3'); const bucket = new s3.Bucket(this, 'example', { bucketName: 'example', versioned: true, publicReadAccess: false, enforceSSL: true }); See
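Besides setting the bucket policy through CDK as above, the "HTTPS only" intent can also be enforced in client code that builds object URLs. A small sketch; the helper name `assertHttpsUrl` is illustrative and not part of any AWS SDK:

```javascript
// Illustrative guard: refuse to use an S3 object URL over clear-text HTTP.
// Complements (does not replace) enforceSSL on the bucket policy itself.
function assertHttpsUrl(url) {
  const parsed = new URL(url);
  if (parsed.protocol !== 'https:') {
    throw new Error(`Refusing non-HTTPS URL: ${url}`);
  }
  return url;
}
```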
|
||||||||||||
javascript:S6252 |
S3 buckets can be versioned. When the S3 bucket is unversioned it means that a new version of an object overwrites an existing one in the S3 bucket. It can lead to unintentional or intentional information loss. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object. Sensitive Code Exampleconst s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', versioned: false // Sensitive }); The default value of Compliant Solutionconst s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', versioned: true }); See
|
||||||||||||
javascript:S6270 |
Resource-based policies granting access to all users can lead to information leakage. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need the access and grant to them only the required privileges. Sensitive Code ExampleThis policy allows all users, including anonymous ones, to access an S3 bucket: import { aws_iam as iam } from 'aws-cdk-lib' import { aws_s3 as s3 } from 'aws-cdk-lib' const bucket = new s3.Bucket(this, "ExampleBucket") bucket.addToResourcePolicy(new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["s3:*"], resources: [bucket.arnForObjects("*")], principals: [new iam.AnyPrincipal()] // Sensitive })) Compliant SolutionThis policy allows only the authorized users: import { aws_iam as iam } from 'aws-cdk-lib' import { aws_s3 as s3 } from 'aws-cdk-lib' const bucket = new s3.Bucket(this, "ExampleBucket") bucket.addToResourcePolicy(new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["s3:*"], resources: [bucket.arnForObjects("*")], principals: [new iam.AccountRootPrincipal()] })) See
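`AnyPrincipal()` synthesizes to a `"Principal": "*"` (or `"Principal": { "AWS": "*" }`) entry in the resulting JSON policy document. A hedged sketch of how such statements could be detected in that plain JSON form; the function name `findPublicStatements` is illustrative and not part of aws-cdk-lib:

```javascript
// Illustrative audit helper over a policy document in its JSON form: collect
// any Allow statement whose principal is the wildcard "*", i.e. what
// iam.AnyPrincipal() synthesizes to.
function findPublicStatements(policyDocument) {
  return (policyDocument.Statement || []).filter((statement) => {
    if (statement.Effect !== 'Allow') return false;
    const principal = statement.Principal;
    return principal === '*' || (principal != null && principal.AWS === '*');
  });
}
```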
|
||||||||||||
javascript:S6275 |
Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade. Sensitive Code Exampleimport { Size } from 'aws-cdk-lib'; import { Volume } from 'aws-cdk-lib/aws-ec2'; new Volume(this, 'unencrypted-explicit', { availabilityZone: 'us-west-2a', size: Size.gibibytes(1), encrypted: false // Sensitive }); import { Size } from 'aws-cdk-lib'; import { Volume } from 'aws-cdk-lib/aws-ec2'; new Volume(this, 'unencrypted-implicit', { availabilityZone: 'eu-west-1a', size: Size.gibibytes(1), }); // Sensitive as encryption is disabled by default Compliant Solutionimport { Size } from 'aws-cdk-lib'; import { Volume } from 'aws-cdk-lib/aws-ec2'; new Volume(this, 'encrypted-explicit', { availabilityZone: 'eu-west-1a', size: Size.gibibytes(1), encrypted: true }); See |
||||||||||||
javascript:S2817 |
This rule is deprecated, and will eventually be removed. Why is this an issue?The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.) Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database. Noncompliant code examplevar db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024); // Noncompliant Resources |
||||||||||||
javascript:S2819 |
Cross-origin communication allows different websites to interact with each other. This interaction is typically achieved through mechanisms like AJAX requests, WebSockets, or postMessage API. However, a vulnerability can arise when these communications are not properly secured by verifying their origins. Why is this an issue?Without origin verification, the target website cannot distinguish between legitimate requests from its own pages and malicious requests from an attacker’s site. The attacker can craft a malicious website or script that sends requests to a target website where the user is already authenticated. This vulnerability class is not about a single specific user input or action, but rather a series of actions that lead to an insecure cross-origin communication. What is the potential impact?The absence of origin verification during cross-origin communications can lead to serious security issues. Data BreachIf an attacker can successfully exploit this vulnerability, they may gain unauthorized access to sensitive data. For instance, a user’s personal information, financial details, or other confidential data could be exposed. This not only compromises the user’s privacy but can also lead to identity theft or financial loss. Unauthorized ActionsAn attacker could manipulate the communication between websites to perform actions on behalf of the user without their knowledge. This could range from making unauthorized purchases to changing user settings or even deleting accounts. How to fix itWhen sending a message, avoid using Code examplesNoncompliant code exampleWhen sending a message: var iframe = document.getElementById("testiframe"); iframe.contentWindow.postMessage("hello", "*"); // Noncompliant: * is used When receiving a message: window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property. 
console.log(event.data); }); Compliant solutionWhen sending a message: var iframe = document.getElementById("testiframe"); iframe.contentWindow.postMessage("hello", "https://secure.example.com"); When receiving a message: window.addEventListener("message", function(event) { if (event.origin !== "http://example.org") return; console.log(event.data) }); ResourcesDocumentation
Standards |
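The receiving-side origin check above generalizes to an allow-list. A minimal sketch, testable outside the browser by passing a plain object in place of the `MessageEvent`; the names `TRUSTED_ORIGINS` and `handleMessage` and the origin value are made-up examples:

```javascript
// Illustrative receiver-side check: only accept messages whose origin is on
// an explicit allow-list, and drop everything else before touching the data.
const TRUSTED_ORIGINS = new Set(['https://secure.example.com']);

function handleMessage(event, sink) {
  if (!TRUSTED_ORIGINS.has(event.origin)) return false; // untrusted sender: ignore
  sink(event.data);
  return true;
}
```

In a page this would be wired up as `window.addEventListener("message", (e) => handleMessage(e, console.log))`.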
||||||||||||
javascript:S6281 |
By default, S3 buckets are private: only the bucket owner can access them. This access control can be relaxed with ACLs or policies. To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:
The other attribute However, all of those options can be enabled by setting the Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to configure:
Sensitive Code ExampleBy default, when not set, the const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket' }); // Sensitive This const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: new s3.BlockPublicAccess({ blockPublicAcls : false, // Sensitive blockPublicPolicy : true, ignorePublicAcls : true, restrictPublicBuckets : true }) }); The attribute const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive }); Compliant SolutionThis const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL }); A similar configuration to the one above can be obtained by setting all parameters of the const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: new s3.BlockPublicAccess({ blockPublicAcls : true, blockPublicPolicy : true, ignorePublicAcls : true, restrictPublicBuckets : true }) }); See
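A configuration review of the four flags above can be sketched as a small validator over the same options object shape that `BlockPublicAccess` takes; the helper name `missingPublicAccessBlocks` is illustrative:

```javascript
// Illustrative validator: report which of the four S3 Block Public Access
// flags are not explicitly set to true in a BlockPublicAccess-style options
// object. An empty result corresponds to the BLOCK_ALL configuration.
const REQUIRED_FLAGS = [
  'blockPublicAcls',
  'blockPublicPolicy',
  'ignorePublicAcls',
  'restrictPublicBuckets',
];

function missingPublicAccessBlocks(options) {
  return REQUIRED_FLAGS.filter((flag) => options[flag] !== true);
}
```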
|
||||||||||||
javascript:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplevar mysql = require('mysql'); var connection = mysql.createConnection( { host:'localhost', user: "admin", database: "project", password: "mypassword", // sensitive multipleStatements: true }); connection.connect(); Compliant Solutionvar mysql = require('mysql'); var connection = mysql.createConnection({ host: process.env.MYSQL_URL, user: process.env.MYSQL_USERNAME, password: process.env.MYSQL_PASSWORD, database: process.env.MYSQL_DATABASE }); connection.connect(); See
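The compliant solution above silently passes `undefined` for any environment variable that is missing. A fail-fast variant makes the misconfiguration explicit at startup instead of at connection time; the helper name `loadDbConfig` is illustrative:

```javascript
// Illustrative fail-fast loader: read credentials from the environment and
// refuse to start when one is missing, rather than falling back to a
// hard-coded default or to undefined.
function loadDbConfig(env) {
  const required = ['MYSQL_URL', 'MYSQL_USERNAME', 'MYSQL_PASSWORD', 'MYSQL_DATABASE'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return {
    host: env.MYSQL_URL,
    user: env.MYSQL_USERNAME,
    password: env.MYSQL_PASSWORD,
    database: env.MYSQL_DATABASE,
  };
}
```

Usage would be `mysql.createConnection(loadDbConfig(process.env))`.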
|
||||||||||||
javascript:S5332 |
Clear-text protocols such as
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code Exampleurl = "http://example.com"; // Sensitive url = "ftp://anonymous@example.com"; // Sensitive url = "telnet://anonymous@example.com"; // Sensitive For nodemailer: const nodemailer = require("nodemailer"); let transporter = nodemailer.createTransport({ secure: false, // Sensitive requireTLS: false // Sensitive }); const nodemailer = require("nodemailer"); let transporter = nodemailer.createTransport({}); // Sensitive For ftp: var Client = require('ftp'); var c = new Client(); c.connect({ 'secure': false // Sensitive }); For telnet-client: const Telnet = require('telnet-client'); // Sensitive For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer: import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const alb = new ApplicationLoadBalancer(this, 'ALB', { vpc: vpc, internetFacing: true }); alb.addListener('listener-http-default', { port: 8080, open: true }); // Sensitive alb.addListener('listener-http-explicit', { protocol: ApplicationProtocol.HTTP, // Sensitive port: 8080, open: true }); For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener: import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new ApplicationListener(this, 'listener-http-explicit-constructor', { loadBalancer: alb, protocol: ApplicationProtocol.HTTP, // Sensitive port: 8080, open: true }); For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer: import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const nlb = new NetworkLoadBalancer(this, 'nlb', { vpc: vpc, internetFacing: true }); var listenerNLB = nlb.addListener('listener-tcp-default', { port: 1234 }); // Sensitive listenerNLB = nlb.addListener('listener-tcp-explicit', { protocol: Protocol.TCP, // Sensitive port: 1234 }); For 
aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener: import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new NetworkListener(this, 'listener-tcp-explicit-constructor', { loadBalancer: nlb, protocol: Protocol.TCP, // Sensitive port: 8080 }); For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener: import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new CfnListener(this, 'listener-http', { defaultActions: defaultActions, loadBalancerArn: alb.loadBalancerArn, protocol: "HTTP", // Sensitive port: 80 }); new CfnListener(this, 'listener-tcp', { defaultActions: defaultActions, loadBalancerArn: alb.loadBalancerArn, protocol: "TCP", // Sensitive port: 80 }); For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer: import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing'; new CfnLoadBalancer(this, 'elb-tcp', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'tcp' // Sensitive }] }); new CfnLoadBalancer(this, 'elb-http', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'http' // Sensitive }] }); For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer: import { LoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing'; const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', { vpc, internetFacing: true, healthCheck: { port: 80, }, listeners: [ { externalPort:10000, externalProtocol: LoadBalancingProtocol.TCP, // Sensitive internalPort:10000 }] }); loadBalancer.addListener({ externalPort:10001, externalProtocol:LoadBalancingProtocol.TCP, // Sensitive internalPort:10001 }); loadBalancer.addListener({ externalPort:10002, externalProtocol:LoadBalancingProtocol.HTTP, // Sensitive internalPort:10002 }); For aws-cdk-lib.aws-elasticache.CfnReplicationGroup: import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache'; new CfnReplicationGroup(this, 'unencrypted-implicit', { replicationGroupDescription: 'exampleDescription' }); // Sensitive new CfnReplicationGroup(this, 
'unencrypted-explicit', { replicationGroupDescription: 'exampleDescription', transitEncryptionEnabled: false // Sensitive }); For aws-cdk-lib.aws-kinesis.CfnStream: import { CfnStream } from 'aws-cdk-lib/aws-kinesis'; new CfnStream(this, 'cfnstream-implicit-unencrytped', undefined); // Sensitive new CfnStream(this, 'cfnstream-explicit-unencrytped', { streamEncryption: undefined // Sensitive }); For aws-cdk-lib.aws-kinesis.Stream: import { Stream } from 'aws-cdk-lib/aws-kinesis'; new Stream(this, 'stream-explicit-unencrypted', { encryption: StreamEncryption.UNENCRYPTED // Sensitive }); Compliant Solutionurl = "https://example.com"; url = "sftp://anonymous@example.com"; url = "ssh://anonymous@example.com"; For nodemailer one of the following options must be set: const nodemailer = require("nodemailer"); let transporter = nodemailer.createTransport({ secure: true, requireTLS: true, port: 465, secured: true }); For ftp: var Client = require('ftp'); var c = new Client(); c.connect({ 'secure': true }); For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer: import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const alb = new ApplicationLoadBalancer(this, 'ALB', { vpc: vpc, internetFacing: true }); alb.addListener('listener-https-explicit', { protocol: ApplicationProtocol.HTTPS, port: 8080, open: true, certificates: [certificate] }); alb.addListener('listener-https-implicit', { port: 8080, open: true, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener: import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new ApplicationListener(this, 'listener-https-explicit', { loadBalancer: loadBalancer, protocol: ApplicationProtocol.HTTPS, port: 8080, open: true, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer: import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const nlb = new NetworkLoadBalancer(this, 
'nlb', { vpc: vpc, internetFacing: true }); nlb.addListener('listener-tls-explicit', { protocol: Protocol.TLS, port: 1234, certificates: [certificate] }); nlb.addListener('listener-tls-implicit', { port: 1234, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener: import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new NetworkListener(this, 'listener-tls-explicit', { loadBalancer: loadBalancer, protocol: Protocol.TLS, port: 8080, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener: import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new CfnListener(this, 'listener-https', { defaultActions: defaultActions, loadBalancerArn: loadBalancerArn, protocol: "HTTPS", port: 80, certificates: [certificate] }); new CfnListener(this, 'listener-tls', { defaultActions: defaultActions, loadBalancerArn: loadBalancerArn, protocol: "TLS", port: 80, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer: import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing'; new CfnLoadBalancer(this, 'elb-ssl', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'ssl', sslCertificateId: sslCertificateId }] }); new CfnLoadBalancer(this, 'elb-https', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'https', sslCertificateId: sslCertificateId }] }); For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer: import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing'; const lb = new LoadBalancer(this, 'elb-ssl', { vpc, internetFacing: true, healthCheck: { port: 80, }, listeners: [ { externalPort:10000, externalProtocol:LoadBalancingProtocol.SSL, internalPort:10000 }] }); lb.addListener({ externalPort:10001, externalProtocol:LoadBalancingProtocol.SSL, internalPort:10001 }); lb.addListener({ externalPort:10002, externalProtocol:LoadBalancingProtocol.HTTPS, internalPort:10002
}); For aws-cdk-lib.aws-elasticache.CfnReplicationGroup: import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache'; new CfnReplicationGroup(this, 'encrypted-explicit', { replicationGroupDescription: 'example', transitEncryptionEnabled: true }); For aws-cdk-lib.aws-kinesis.Stream: import { Stream } from 'aws-cdk-lib/aws-kinesis'; new Stream(this, 'stream-implicit-encrypted'); new Stream(this, 'stream-explicit-encrypted-selfmanaged', { encryption: StreamEncryption.KMS, encryptionKey: encryptionKey, }); new Stream(this, 'stream-explicit-encrypted-managed', { encryption: StreamEncryption.MANAGED }); For aws-cdk-lib.aws-kinesis.CfnStream: import { CfnStream } from 'aws-cdk-lib/aws-kinesis'; new CfnStream(this, 'cfnstream-explicit-encrypted', { streamEncryption: { encryptionType: encryptionType, keyId: encryptionKey.keyId, } }); ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See
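The URL-level part of this rule (flagging `http://`, `ftp://`, and `telnet://` literals) can be mirrored by a small scheme check, usable in tests or lint-like scripts; the name `isClearTextUrl` is illustrative:

```javascript
// Illustrative check: does a URL use one of the clear-text schemes this rule
// flags? Uses the WHATWG URL parser, so invalid URLs throw rather than pass.
const CLEAR_TEXT_SCHEMES = new Set(['http:', 'ftp:', 'telnet:']);

function isClearTextUrl(url) {
  return CLEAR_TEXT_SCHEMES.has(new URL(url).protocol);
}
```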
|
||||||||||||
javascript:S6299 |
The Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML content with the use of native browser APIs like
It’s still possible to explicitly use Ask Yourself WhetherThe application needs to render HTML content which:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleWhen using Vue.js templates, the <div v-html="htmlContent"></div> <!-- Noncompliant --> When using a rendering function, the Vue.component('element', { render: function (createElement) { return createElement( 'div', { domProps: { innerHTML: this.htmlContent, // Noncompliant } } ); }, }); When using JSX, the <div domPropsInnerHTML={this.htmlContent}></div> <!-- Noncompliant --> Compliant SolutionWhen using Vue.js templates, putting the content as a child node of the element is safe: <div>{{ htmlContent }}</div> When using a rendering function, using the Vue.component('element', { render: function (createElement) { return createElement( 'div', { domProps: { innerText: this.htmlContent, } }, this.htmlContent // Child node ); }, }); When using JSX, putting the content as a child node of the element is safe: <div>{this.htmlContent}</div> See |
||||||||||||
javascript:S6303 |
Using unencrypted RDS DB resources exposes data to unauthorized access. This situation can occur in a variety of scenarios, such as:
After a successful intrusion, the underlying applications are exposed to:
AWS-managed encryption at rest reduces this risk with a simple switch. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine. Sensitive Code ExampleFor import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBCluster(this, 'example', { storageEncrypted: false, // Sensitive }); For import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBInstance(this, 'example', { storageEncrypted: false, // Sensitive }); For import { aws_rds as rds } from 'aws-cdk-lib'; import { aws_ec2 as ec2 } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; const cluster = new rds.DatabaseCluster(this, 'example', { engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }), instanceProps: { vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, }, vpc, }, storageEncrypted: false, // Sensitive }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseClusterFromSnapshot(this, 'example', { engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }), instanceProps: { vpc, }, snapshotIdentifier: 'exampleSnapshot', storageEncrypted: false, // Sensitive }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseInstance(this, 'example', { engine: rds.DatabaseInstanceEngine.POSTGRES, vpc, storageEncrypted: false, // Sensitive }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const sourceInstance: rds.DatabaseInstance; new rds.DatabaseInstanceReadReplica(this, 'example', { sourceDatabaseInstance: sourceInstance, instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE), vpc, storageEncrypted: false, // Sensitive }); Compliant SolutionFor import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBCluster(this, 'example', { storageEncrypted: true, }); For import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBInstance(this, 'example', 
{ storageEncrypted: true, }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; const cluster = new rds.DatabaseCluster(this, 'example', { engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }), instanceProps: { vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, }, vpc, }, storageEncrypted: true, }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseClusterFromSnapshot(this, 'example', { engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }), instanceProps: { vpc, }, snapshotIdentifier: 'exampleSnapshot', storageEncrypted: true, }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseInstance(this, 'example', { engine: rds.DatabaseInstanceEngine.POSTGRES, vpc, storageEncrypted: true, }); For import { aws_rds as rds } from 'aws-cdk-lib'; declare const sourceInstance: rds.DatabaseInstance; new rds.DatabaseInstanceReadReplica(this, 'example', { sourceDatabaseInstance: sourceInstance, instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE), vpc, storageEncrypted: true, }); See
|
||||||||||||
javascript:S6304 |
A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information will occur. Ask Yourself WhetherThe AWS account has more than one resource with different levels of sensitivity. A risk exists if you answered yes to this question. Recommended Secure Coding PracticesIt’s recommended to apply the least privilege principle, i.e., by only granting access to necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of data they store or process. Therefore, managing a secure access control is less prone to errors. Sensitive Code ExampleThe wildcard import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [ new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["iam:CreatePolicyVersion"], resources: ["*"] // Sensitive }) ] }) Compliant SolutionRestrict the update permission to the appropriate subset of policies: import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [ new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["iam:CreatePolicyVersion"], resources: ["arn:aws:iam:::policy/team1/*"] }) ] }) Exceptions
See
|
||||||||||||
javascript:S5691 |
Hidden files are created automatically by many tools to save user-preferences, well-known examples are Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleExpress.js serve-static middleware: let serveStatic = require("serve-static"); let app = express(); let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'}); // Sensitive app.use(serveStaticMiddleware); Compliant SolutionExpress.js serve-static middleware: let serveStatic = require("serve-static"); let app = express(); let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'}); // Compliant: ignore or deny are recommended values let serveStaticDefault = serveStatic('public', { 'index': false}); // Compliant: by default, "dotfiles" (file or directory that begins with a dot) are not served (with the exception that files within a directory that begins with a dot are not ignored), see serve-static module documentation app.use(serveStaticMiddleware); See
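The "dotfiles" notion used by serve-static can be expressed as a small path predicate, useful for custom static handlers that do not go through the middleware; the function name `isDotfilePath` is illustrative and this is a simplified sketch, not serve-static's actual matching logic:

```javascript
// Illustrative predicate: does any path segment name a dotfile or dot
// directory (".env", ".git/config", ...)? "." and ".." are excluded here
// because they are path navigation, not hidden-file names.
function isDotfilePath(requestPath) {
  return requestPath
    .split('/')
    .some((segment) =>
      segment !== '' && segment !== '.' && segment !== '..' && segment.startsWith('.')
    );
}
```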
|
||||||||||||
javascript:S5693 |
Rejecting requests with significant content length is a good practice to control the network traffic intensity and thus resource consumption in order to prevent DoS attacks. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to customize the rule with the limit values that correspond to the web application. Sensitive Code Exampleformidable file upload module: const form = new Formidable(); form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB const formDefault = new Formidable(); // Sensitive, the default value is 200MB multer (Express.js middleware) file upload module: let diskUpload = multer({ storage: diskStorage, limits: { fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB } }); let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit storage: diskStorage, }); body-parser module: // 4MB is more than the recommended limit of 2MB for non-file-upload requests let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive Compliant Solutionformidable file upload module: const form = new Formidable(); form.maxFileSize = 8000000; // Compliant: 8MB multer (Express.js middleware) file upload module: let diskUpload = multer({ storage: diskStorage, limits: { fileSize: 8000000 // Compliant: 8MB } }); body-parser module: let jsonParser = bodyParser.json(); // Compliant, when the limit is not defined, the default value is set to 100kb let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant See
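The core of these limits is a comparison against the declared request size. A hedged sketch of such a gate for handlers that do not use body-parser or multer; the name `exceedsLimit` is illustrative, and a real implementation must also cap the bytes actually read, since Content-Length is client-supplied:

```javascript
// Illustrative request-size gate: reject requests whose declared
// Content-Length exceeds a limit (2 MB here, matching the body-parser
// recommendation above). A missing or malformed header yields false, so a
// streaming byte counter is still required as the authoritative check.
function exceedsLimit(headers, limitBytes = 2 * 1024 * 1024) {
  const declared = Number(headers['content-length']);
  return Number.isFinite(declared) && declared > limitBytes;
}
```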
|
||||||||||||
javascript:S6302 |
A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information. Ask Yourself WhetherIdentities obtaining all the permissions:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used. Sensitive Code ExampleA customer-managed policy that grants all permissions by using the wildcard (*) in the import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["*"], // Sensitive resources: ["arn:aws:iam:::user/*"], }) Compliant SolutionA customer-managed policy that grants only the required permissions: import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["iam:GetAccountSummary"], resources: ["arn:aws:iam:::user/*"], }) See
|
||||||||||||
javascript:S6308 |
Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated. To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:
Thus, adversaries cannot access the data if they gain physical access to the storage medium. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to encrypt OpenSearch domains that contain sensitive information. OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary. Sensitive Code ExampleFor aws-cdk-lib.aws_opensearchservice.Domain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', { version: EngineVersion.OPENSEARCH_1_3, }); // Sensitive, encryption must be explicitly enabled For aws-cdk-lib.aws_opensearchservice.CfnDomain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', { engineVersion: 'OpenSearch_1.3', }); // Sensitive, encryption must be explicitly enabled Compliant SolutionFor aws-cdk-lib.aws_opensearchservice.Domain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', { version: EngineVersion.OPENSEARCH_1_3, encryptionAtRest: { enabled: true, }, }); For aws-cdk-lib.aws_opensearchservice.CfnDomain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', { engineVersion: 'OpenSearch_1.3', encryptionAtRestOptions: { enabled: true, }, }); See
|
||||||||||||
javascript:S2077 |
Formatted SQL queries can be difficult to maintain, debug and can increase the risk of SQL injection when concatenating untrusted values into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649), the goal is only to highlight complex/formatted queries. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example// === MySQL === const mysql = require('mysql'); const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db }); mycon.connect(function(err) { mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive }); // === PostgreSQL === const pg = require('pg'); const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db }); pgcon.connect(); pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive Compliant Solution// === MySQL === const mysql = require('mysql'); const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db }); mycon.connect(function(err) { mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {}); }); // === PostgreSQL === const pg = require('pg'); const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db }); pgcon.connect(); pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {}); ExceptionsThis rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call. const sql = 'SELECT * FROM users WHERE id = ' + userinput; mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised. See
|
||||||||||||
javascript:S6317 |
Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access. Why is this an issue? AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources. For such policies, it is easy to define very broad permissions (by using wildcard (*) permissions), so care must be taken to grant only what is needed. If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities. What is the potential impact? AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope. Privilege escalation: When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities. For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets.
If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account. How to fix it in AWS CDKCode examplesIn this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges. Noncompliant code exampleimport { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["lambda:UpdateFunctionCode"], resources: ["*"], // Noncompliant })], }); Compliant solutionThe policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed. import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["lambda:UpdateFunctionCode"], resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"], })], }); How does this work?Principle of least privilegeWhen creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else. To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used. ResourcesDocumentation
Articles & blog posts
Standards |
||||||||||||
javascript:S6319 |
Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. In the event that adversaries physically access the storage media, they cannot decrypt encrypted data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary. Sensitive Code Example: For aws-cdk-lib.aws_sagemaker.CfnNotebookInstance: import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker'; new CfnNotebookInstance(this, 'example', { instanceType: 'instanceType', roleArn: 'roleArn' }); // Sensitive Compliant Solution: For aws-cdk-lib.aws_sagemaker.CfnNotebookInstance: import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker'; const encryptionKey = new Key(this, 'example', { enableKeyRotation: true, }); new CfnNotebookInstance(this, 'example', { instanceType: 'instanceType', roleArn: 'roleArn', kmsKeyId: encryptionKey.keyId }); See |
||||||||||||
javascript:S5689 |
Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment. Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version. Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesIn general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle. The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header. Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that
this does not provide as much protection as regular updates and patches. Sensitive Code Example: In Express.js, version information is disclosed by default in the x-powered-by HTTP response header: let express = require('express'); let example = express(); // Sensitive example.get('/', function (req, res) { res.send('example') }); Compliant Solution
let express = require('express'); let example = express(); example.disable("x-powered-by"); Or with helmet’s hidePoweredBy middleware: let helmet = require("helmet"); let example = express(); example.use(helmet.hidePoweredBy()); See
|
||||||||||||
javascript:S5148 |
A newly opened window having access back to the originating window could allow basic phishing attacks (the "reverse tabnabbing" attack). For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Use the noopener window feature so that the newly opened window has no access back to the originating one via window.opener. Note: in Chrome 88+, Firefox 79+ or Safari 12.1+, anchors with target="_blank" imply rel="noopener" by default. Sensitive Code Example: window.open("https://example.com/dangerous"); // Sensitive Compliant Solution: window.open("https://example.com/dangerous", "WindowName", "noopener"); See |
||||||||||||
javascript:S5443 |
Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp or /var/tmp.
In the past, it has led to the following vulnerabilities: This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp or /var/tmp.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: const fs = require('fs'); let tmp_file = "/tmp/temporary_file"; // Sensitive fs.readFile(tmp_file, 'utf8', function (err, data) { // ... }); const fs = require('fs'); let tmp_dir = process.env.TMPDIR; // Sensitive fs.readFile(tmp_dir + "/temporary_file", 'utf8', function (err, data) { // ... }); Compliant Solution: const tmp = require('tmp'); const tmpobj = tmp.fileSync(); // Compliant See
|
||||||||||||
javascript:S4036 |
When executing an OS command, unless you specify the full path to the executable, the locations listed in the application’s PATH environment variable are searched until a matching executable is found. An attacker able to write to one of those locations could have a malicious executable run instead of the intended one. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Fully qualified/absolute paths should be used to specify the OS command to execute. Sensitive Code Example: const cp = require('child_process'); cp.exec('file.exe'); // Sensitive Compliant Solution: const cp = require('child_process'); cp.exec('/usr/bin/file.exe'); // Compliant See |
||||||||||||
javascript:S6321 |
Why is this an issue?Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and
outbound traffic. What is the potential impact? Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system. Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system. How to fix it: It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers. Code examples. Noncompliant code example: For aws-cdk-lib.aws_ec2.Instance and other constructs
that support a connections attribute: import {aws_ec2 as ec2} from 'aws-cdk-lib' const instance = new ec2.Instance(this, "default-own-security-group",{ instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, instanceName: "test-instance" }) instance.connections.allowFrom( ec2.Peer.anyIpv4(), // Noncompliant ec2.Port.tcp(22), /*description*/ "Allows SSH from all IPv4" ) For aws-cdk-lib.aws_ec2.SecurityGroup import {aws_ec2 as ec2} from 'aws-cdk-lib' const securityGroup = new ec2.SecurityGroup(this, "custom-security-group", { vpc: vpc }) securityGroup.addIngressRule( ec2.Peer.anyIpv4(), // Noncompliant ec2.Port.tcpRange(1, 1024) ) For aws-cdk-lib.aws_ec2.CfnSecurityGroup import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnSecurityGroup( this, "cfn-based-security-group", { groupDescription: "cfn based security group", groupName: "cfn-based-security-group", vpcId: vpc.vpcId, securityGroupIngress: [ { ipProtocol: "6", cidrIp: "0.0.0.0/0", // Noncompliant fromPort: 22, toPort: 22 } ] } ) For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnSecurityGroupIngress( // Noncompliant this, "ingress-all-ip-tcp-ssh", { ipProtocol: "tcp", cidrIp: "0.0.0.0/0", fromPort: 22, toPort: 22, groupId: securityGroup.attrGroupId }) Compliant solution: For aws-cdk-lib.aws_ec2.Instance and other constructs
that support a connections attribute: import {aws_ec2 as ec2} from 'aws-cdk-lib' const instance = new ec2.Instance(this, "default-own-security-group",{ instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, instanceName: "test-instance" }) instance.connections.allowFrom( ec2.Peer.ipv4("192.0.2.0/24"), ec2.Port.tcp(22), /*description*/ "Allows SSH from a trusted range" ) For aws-cdk-lib.aws_ec2.SecurityGroup import {aws_ec2 as ec2} from 'aws-cdk-lib' const securityGroup3 = new ec2.SecurityGroup(this, "custom-security-group", { vpc: vpc }) securityGroup3.addIngressRule( ec2.Peer.anyIpv4(), ec2.Port.tcpRange(1024, 1048) ) For aws-cdk-lib.aws_ec2.CfnSecurityGroup import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnSecurityGroup( this, "cfn-based-security-group", { groupDescription: "cfn based security group", groupName: "cfn-based-security-group", vpcId: vpc.vpcId, securityGroupIngress: [ { ipProtocol: "6", cidrIp: "192.0.2.0/24", fromPort: 22, toPort: 22 } ] } ) For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress new ec2.CfnSecurityGroupIngress( this, "ingress-all-ipv4-tcp-http", { ipProtocol: "6", cidrIp: "0.0.0.0/0", fromPort: 80, toPort: 80, groupId: securityGroup.attrGroupId } ) Resources: Documentation
Standards |
||||||||||||
javascript:S6327 |
Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS allows messages to be encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary. Sensitive Code Exampleimport { Topic } from 'aws-cdk-lib/aws-sns'; new Topic(this, 'exampleTopic'); // Sensitive import { Topic, CfnTopic } from 'aws-cdk-lib/aws-sns'; new CfnTopic(this, 'exampleCfnTopic'); // Sensitive Compliant Solutionimport { Topic } from 'aws-cdk-lib/aws-sns'; const encryptionKey = new Key(this, 'exampleKey', { enableKeyRotation: true, }); new Topic(this, 'exampleTopic', { masterKey: encryptionKey }); import { CfnTopic } from 'aws-cdk-lib/aws-sns'; const encryptionKey = new Key(this, 'exampleKey', { enableKeyRotation: true, }); cfnTopic = new CfnTopic(this, 'exampleCfnTopic', { kmsMasterKeyId: encryptionKey.keyId }); See |
||||||||||||
javascript:S6329 |
Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption. Depending on the component, inbound access from the Internet can be enabled via:
Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident. This decision increases the likelihood of attacks on the organization, such as:
Ask Yourself WhetherThis cloud resource:
There is a risk if you answered no to any of those questions. Recommended Secure Coding PracticesAvoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites. Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components. The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address. Sensitive Code ExampleFor aws-cdk-lib.aws_ec2.Instance and similar constructs: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.Instance(this, "example", { instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC} // Sensitive }) For aws-cdk-lib.aws_ec2.CfnInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnInstance(this, "example", { instanceType: "t2.micro", imageId: "ami-0ea0f26a6d50850c5", networkInterfaces: [ { deviceIndex: "0", associatePublicIpAddress: true, // Sensitive deleteOnTermination: true, subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PUBLIC}).subnetIds[0] } ] }) For aws-cdk-lib.aws_dms.CfnReplicationInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new dms.CfnReplicationInstance( this, "example", { replicationInstanceClass: "dms.t2.micro", allocatedStorage: 5, publiclyAccessible: true, // Sensitive replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier, vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup] }) For aws-cdk-lib.aws_rds.CfnDBInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' const rdsSubnetGroupPublic = new rds.CfnDBSubnetGroup(this, "publicSubnet", { dbSubnetGroupDescription: "Subnets", dbSubnetGroupName: "publicSn", subnetIds: vpc.selectSubnets({ subnetType: 
ec2.SubnetType.PUBLIC }).subnetIds }) new rds.CfnDBInstance(this, "example", { engine: "postgres", masterUsername: "foobar", masterUserPassword: "12345678", dbInstanceClass: "db.r5.large", allocatedStorage: "200", iops: 1000, dbSubnetGroupName: rdsSubnetGroupPublic.ref, publiclyAccessible: true, // Sensitive vpcSecurityGroups: [sg.securityGroupId] }) Compliant SolutionFor aws-cdk-lib.aws_ec2.Instance and similar constructs: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.Instance( this, "example", { instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS} }) For aws-cdk-lib.aws_ec2.CfnInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnInstance(this, "example", { instanceType: "t2.micro", imageId: "ami-0ea0f26a6d50850c5", networkInterfaces: [ { deviceIndex: "0", associatePublicIpAddress: false, deleteOnTermination: true, subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}).subnetIds[0] } ] }) For aws-cdk-lib.aws_dms.CfnReplicationInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new dms.CfnReplicationInstance( this, "example", { replicationInstanceClass: "dms.t2.micro", allocatedStorage: 5, publiclyAccessible: false, replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier, vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup] }) For aws-cdk-lib.aws_rds.CfnDBInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' const rdsSubnetGroupPrivate = new rds.CfnDBSubnetGroup(this, "example",{ dbSubnetGroupDescription: "Subnets", dbSubnetGroupName: "privateSn", subnetIds: vpc.selectSubnets({ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }).subnetIds }) new rds.CfnDBInstance(this, "example", { engine: "postgres", masterUsername: "foobar", masterUserPassword: "12345678", dbInstanceClass: "db.r5.large", allocatedStorage: "200", iops: 1000, dbSubnetGroupName: rdsSubnetGroupPrivate.ref, publiclyAccessible: false, vpcSecurityGroups: 
[sg.securityGroupId] }) See
|
||||||||||||
javascript:S6333 |
Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure. Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIn general, prefer limiting API access to a specific set of people or entities. AWS provides multiple methods to do so:
Sensitive Code ExampleFor aws-cdk-lib.aws_apigateway.Resource: import {aws_apigateway as apigateway} from "aws-cdk-lib" const resource = api.root.addResource("example") resource.addMethod( "GET", new apigateway.HttpIntegration("https://example.org"), { authorizationType: apigateway.AuthorizationType.NONE // Sensitive } ) For aws-cdk-lib.aws_apigatewayv2.CfnRoute: import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib" new apigateway.CfnRoute(this, "no-auth", { apiId: api.ref, routeKey: "GET /no-auth", authorizationType: "NONE", // Sensitive target: exampleIntegration }) Compliant SolutionFor aws-cdk-lib.aws_apigateway.Resource: import {aws_apigateway as apigateway} from "aws-cdk-lib" const resource = api.root.addResource("example",{ defaultMethodOptions:{ authorizationType: apigateway.AuthorizationType.IAM } }) resource.addMethod( "POST", new apigateway.HttpIntegration("https://example.org"), { authorizationType: apigateway.AuthorizationType.IAM } ) resource.addMethod( // authorizationType is inherited from the Resource's configured defaultMethodOptions "GET" ) For aws-cdk-lib.aws_apigatewayv2.CfnRoute: import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib" new apigateway.CfnRoute(this, "auth", { apiId: api.ref, routeKey: "POST /auth", authorizationType: "AWS_IAM", target: exampleIntegration }) See
|
||||||||||||
javascript:S2092 |
When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplecookie-session module: let session = cookieSession({ secure: false,// Sensitive }); // Sensitive express-session module: const express = require('express'); const session = require('express-session'); let app = express(); app.use(session({ cookie: { secure: false // Sensitive } })); cookies module: let cookies = new Cookies(req, res, { keys: keys }); cookies.set('LastVisit', new Date().toISOString(), { secure: false // Sensitive }); // Sensitive csurf module: const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const express = require('express'); let csrfProtection = csrf({ cookie: { secure: false }}); // Sensitive Compliant Solutioncookie-session module: let session = cookieSession({ secure: true,// Compliant }); // Compliant express-session module: const express = require('express'); const session = require('express-session'); let app = express(); app.use(session({ cookie: { secure: true // Compliant } })); cookies module: let cookies = new Cookies(req, res, { keys: keys }); cookies.set('LastVisit', new Date().toISOString(), { secure: true // Compliant }); // Compliant csurf module: const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const express = require('express'); let csrfProtection = csrf({ cookie: { secure: true }}); // Compliant See
|
||||||||||||
javascript:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities: The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same-origin policy. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: nodejs http built-in module: const http = require('http'); const srv = http.createServer((req, res) => { res.writeHead(200, { 'Access-Control-Allow-Origin': '*' }); // Sensitive res.end('ok'); }); srv.listen(3000); Express.js framework with cors middleware: const cors = require('cors'); let app1 = express(); app1.use(cors()); // Sensitive: by default origin is set to * let corsOptions = { origin: '*' // Sensitive }; let app2 = express(); app2.use(cors(corsOptions)); User-controlled origin: function (req, res) { const origin = req.header('Origin'); res.setHeader('Access-Control-Allow-Origin', origin); // Sensitive }; Compliant Solution: nodejs http built-in module: const http = require('http'); const srv = http.createServer((req, res) => { res.writeHead(200, { 'Access-Control-Allow-Origin': 'trustedwebsite.com' }); // Compliant res.end('ok'); }); srv.listen(3000); Express.js framework with cors middleware: const cors = require('cors'); let corsOptions = { origin: 'trustedwebsite.com' // Compliant }; let app = express(); app.use(cors(corsOptions)); User-controlled origin validated with an allow-list: function (req, res) { const origin = req.header('Origin'); if (trustedOrigins.indexOf(origin) >= 0) { res.setHeader('Access-Control-Allow-Origin', origin); } }; See
|
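The allow-list check from the compliant solution can be pulled out into a standalone function so it can be tested on its own; the origin values below are placeholders:

```javascript
// Hypothetical allow-list, factored out of the request handler.
const trustedOrigins = ['https://trusted.example.com', 'https://app.example.org'];

function corsOriginFor(requestOrigin) {
  // Exact string match only; returning null means the handler should not set
  // the Access-Control-Allow-Origin header at all for this request.
  return trustedOrigins.includes(requestOrigin) ? requestOrigin : null;
}
```

Exact matching matters: substring or prefix checks (e.g. origin.startsWith('https://trusted')) can be bypassed with attacker-registered domains.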
||||||||||||
javascript:S5247 |
To reduce the risk of cross-site scripting attacks, templating systems allow configuration of automatic variable escaping before rendering templates. Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, a "html auto-escaping" strategy (which only transforms html characters into html entities) will not be relevant when variables are used in a html attribute, because characters such as ':' are not escaped and an injection like the following remains possible: <a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie) <a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack) Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one. Sensitive Code Examplemustache.js template engine: let Mustache = require("mustache"); Mustache.escape = function(text) {return text;}; // Sensitive let rendered = Mustache.render(template, { name: inputName }); handlebars.js template engine: const Handlebars = require('handlebars'); let source = "<p>attack {{name}}</p>"; let template = Handlebars.compile(source, { noEscape: true }); // Sensitive markdown-it markup language parser: const markdownIt = require('markdown-it'); let md = markdownIt({ html: true // Sensitive }); let result = md.render('# <b>attack</b>'); marked markup language parser: const marked = require('marked'); marked.setOptions({ renderer: new marked.Renderer(), sanitize: false // Sensitive }); console.log(marked("# test <b>attack/b>")); kramed markup language parser: let kramed = require('kramed'); var options = { renderer: new kramed.Renderer({ sanitize: false // Sensitive }) }; Compliant Solutionmustache.js template engine: let Mustache = require("mustache"); let rendered = Mustache.render(template, { name: inputName }); // Compliant autoescaping is on by default handlebars.js template engine: const Handlebars = require('handlebars'); let source = "<p>attack {{name}}</p>"; let data = { "name": "<b>Alan</b>" }; let template = Handlebars.compile(source); // Compliant by default noEscape is set to false markdown-it markup language parser: let md = require('markdown-it')(); // Compliant by default html is set to false let result = md.render('# <b>attack</b>'); marked markup language parser: const marked = require('marked'); marked.setOptions({ renderer: new marked.Renderer() }); // Compliant by default sanitize is set to true console.log(marked("# test <b>attack/b>")); kramed markup language parser: let 
kramed = require('kramed'); let options = { renderer: new kramed.Renderer({ sanitize: true // Compliant }) }; console.log(kramed('Attack [xss?](javascript:alert("xss")).', options)); See
|
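As a rough illustration of what an "html auto-escaping" strategy does (and does not) protect against, here is a minimal escaping helper; note that a javascript: URL contains no HTML metacharacters and passes through untouched, which is why attribute contexts need extra validation beyond auto-escaping:

```javascript
// Minimal sketch of HTML auto-escaping: metacharacters in a variable are
// replaced with entities before the value is interpolated into markup.
function escapeHtml(text) {
  return String(text).replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  }[ch]));
}
```

escapeHtml('<b>attack</b>') neutralizes injected tags, but escapeHtml('javascript:alert(1)') returns the string unchanged, so placing it in an href still yields a working payload.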
||||||||||||
javascript:S6330 |
Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary. Sensitive Code Example: For aws-cdk-lib.aws_sqs.Queue: import { Queue } from 'aws-cdk-lib/aws-sqs'; new Queue(this, 'example'); // Sensitive For aws-cdk-lib.aws_sqs.CfnQueue: import { CfnQueue } from 'aws-cdk-lib/aws-sqs'; new CfnQueue(this, 'example'); // Sensitive Compliant Solution: For aws-cdk-lib.aws_sqs.Queue: import { Queue } from 'aws-cdk-lib/aws-sqs'; new Queue(this, 'example', { encryption: QueueEncryption.KMS_MANAGED }); For aws-cdk-lib.aws_sqs.CfnQueue: import { CfnQueue } from 'aws-cdk-lib/aws-sqs'; const encryptionKey = new Key(this, 'example', { enableKeyRotation: true, }); new CfnQueue(this, 'example', { kmsMasterKeyId: encryptionKey.keyId }); See
|
||||||||||||
javascript:S6332 |
Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise leak data, they are not able to access it. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices: It's recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For the `FileSystem` construct from `aws-cdk-lib/aws-efs`:

```typescript
import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'unencrypted-explicit', {
  vpc: new Vpc(this, 'VPC'),
  encrypted: false // Sensitive
});
```

For the `CfnFileSystem` construct from `aws-cdk-lib/aws-efs`:

```typescript
import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'unencrypted-implicit-cfn', {}); // Sensitive as encryption is disabled by default
```

Compliant Solution

For the `FileSystem` construct from `aws-cdk-lib/aws-efs`:

```typescript
import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'encrypted-explicit', {
  vpc: new Vpc(this, 'VPC'),
  encrypted: true
});
```

For the `CfnFileSystem` construct from `aws-cdk-lib/aws-efs`:

```typescript
import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'encrypted-explicit-cfn', {
  encrypted: true
});
```

See
|
||||||||||||
secrets:S6700 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?A RapidAPI key is a unique identifier that allows you to access and use APIs provided by RapidAPI. This key is used to track your API usage, manage your subscriptions, and ensure that you have the necessary permissions to access the APIs you are using. One RapidAPI key can be used to authenticate against a set of multiple other third-party services, depending on the key entitlement. If a RapidAPI key leaks to an unintended audience, it can have several potential consequences. Especially, attackers may use the leaked key to access and utilize the APIs associated with that key without permission. This can result in unauthorized usage of API services, potentially leading to misuse, abuse, or excessive consumption of resources. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Analyze recent secret use When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. 
Doing this will help determine whether an attacker took advantage of the leaked secret, and to what extent. This operation should be part of a global incident response process. RapidAPI services include an audit trail feature that can be used to audit malicious use of the compromised key.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```java
props.set("rapidapi_key", "6f1bbe24b9mshcbb5030202794a4p18f7d0jsndd55ab0f981d") // Noncompliant
```

Compliant solution:

```java
props.set("rapidapi_key", System.getenv("rapidapi_key"))
```

Resources

Standards
Documentation
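As a complementary sketch (not part of the rule), the same environment-based approach in Node.js, failing fast when the variable is missing; the variable name here is only an example:

```javascript
// Hypothetical helper: fetch a required secret from the environment and fail
// fast if it is absent, so a misconfigured deployment is caught at startup
// instead of at the first API call.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage (variable name is an assumption, not mandated by RapidAPI):
// const rapidApiKey = requireEnv("RAPIDAPI_KEY");
```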
|
||||||||||||
secrets:S6701 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?Telegram bot keys are used to authenticate and authorize a bot to interact with the Telegram Bot API. These keys are essentially access tokens that allow the bot to send and receive messages, manage groups and channels, and perform other actions on behalf of the bot. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret. Compromise of sensitive personal dataThis kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users
have shared on the platform. This is called personally identifiable information.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam: An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can cause users to be exposed to the following:
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user's credentials, bypass multi-factor authentication (MFA), and take over the user's account on the trusted website.

Malware distribution: Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```java
props.set("api_token", "7299363101:AAWJlilLyeMaKgTTrrfsyrtxDqqI-cdI-TF") // Noncompliant
```

Compliant solution:

```java
props.set("api_token", System.getenv("API_TOKEN"))
```

Resources

Standards |
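A small defensive habit that complements revocation (an illustrative sketch, not part of the rule): never write the raw token to logs; mask it first so an accidental log line does not become a second leak.

```javascript
// Hypothetical helper: keep only the last few characters of a secret when it
// has to appear in logs or error messages.
function maskSecret(secret, visibleChars = 4) {
  if (typeof secret !== "string" || secret.length <= visibleChars) {
    return "***";
  }
  return "***" + secret.slice(-visibleChars);
}

console.log(maskSecret("7299363101:AAWJlilLyeMaKgTTrrfsyrtxDqqI-cdI-TF")); // "***I-TF"
```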
||||||||||||
secrets:S6702 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?A SonarQube token is a unique key that serves as an authentication mechanism for accessing the SonarQube platform’s APIs. It is used to securely authenticate and authorize external tools or services to interact with SonarQube. Tokens are typically generated for specific users or applications and can be configured with different levels of access permissions. By using a token, external tools or services can perform actions such as analyzing code, retrieving analysis results, creating projects, or managing quality profiles within SonarQube. If a SonarQube token leaks to an unintended audience, it can pose a security risk to the SonarQube instance and the associated projects. Attackers may use the leaked token to gain unauthorized access to the SonarQube instance. They can potentially view sensitive information, modify project settings, or perform other dangerous actions. Additionally, attackers with access to a token can modify code analysis results. This can lead to false positives or negatives in the analysis, compromising the accuracy and reliability of the platform. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. 
Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use: When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret, and to what extent. This operation should be part of a global incident response process. The SonarQube audit log can be downloaded from the product web interface and can be used to audit malicious use of the compromised key. This feature is available starting with SonarQube Enterprise Edition.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```java
props.set("sonar_secret", "squ_b4556a16fa2d28519d2451a911d2e073024010bc") // Noncompliant
```

Compliant solution:

```java
props.set("sonar_secret", System.getenv("SONAR_SECRET"))
```

Resources

Standards
Documentation
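SonarQube's Web API accepts a user token as the Basic-authentication login with an empty password. A Node.js sketch of building that header, with the token read from the environment rather than from source code, could look like this (illustrative, not part of the rule):

```javascript
// Sketch: build the Authorization header for a SonarQube Web API call.
// SonarQube accepts the token as the Basic-auth login with an empty password.
function sonarAuthHeader(token) {
  return "Basic " + Buffer.from(`${token}:`).toString("base64");
}

// Example usage: read the token from the environment, never from source code.
// const header = sonarAuthHeader(process.env.SONAR_SECRET);
```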
|
||||||||||||
secrets:S6703 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?Passwords are often used to authenticate users against database engines. They are associated with user accounts that are granted specific permissions over the database and its hosted data. If a database password leaks to an unintended audience, it can have serious consequences for the security of your database instance, the data stored within it, and the applications that rely on it. Compromise of sensitive dataIf the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed. Security downgradeApplications relying on a database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise. 
For example, if the database instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Never hard-code secrets, not even the default values: It is important that you do not hard-code secrets, even default values. First, hard-coded default secrets are often short and can be easily compromised even by attackers who do not have access to the code base. Second, hard-coded default secrets can cause problems if they need to be changed or replaced. And most importantly, there is always the possibility of accidentally shipping default secrets to production services, which can lead to security vulnerabilities and make production insecure by default. To minimize these risks, it is recommended to apply the above strategies, even for the default settings.

Code examples

Noncompliant code example:

```java
public static String ConnectionString = "server=database-server;uid=user;pwd=P@ssw0rd;database=ProductionData"; // Noncompliant
```

Compliant solution:

```java
public static String ConnectionString = String.format(
    "server=database-server;uid=user;pwd=%s;database=ProductionData",
    System.getenv("DB_PASSWORD")
);
```

Resources

Standards |
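To illustrate the same pattern in JavaScript (a sketch under the assumption of a Node.js application; server, user, and variable names are illustrative), the connection string can be assembled at runtime with no fallback default password:

```javascript
// Hypothetical sketch: build the connection string at runtime. There is
// deliberately no default value for the password, so a missing secret fails
// loudly instead of silently shipping an insecure default.
function buildConnectionString(env) {
  const password = env.DB_PASSWORD;
  if (!password) {
    throw new Error("DB_PASSWORD is not set");
  }
  return `server=database-server;uid=user;pwd=${password};database=ProductionData`;
}

// Example usage:
// const connectionString = buildConnectionString(process.env);
```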
||||||||||||
secrets:S6704 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?Riot API keys are used to access the Riot Games API, which provides developers with programmatic access to various data and services related to Riot Games' products, such as League of Legends. These API keys are used to authenticate and authorize requests made to the API, allowing developers to retrieve game data, player statistics, match history, and other related information. If a Riot API key is leaked to an unintended audience, it can have significant consequences. One of the main risks is unauthorized access. The unintended audience may exploit the leaked API key to gain entry to the Riot Games API. This can result in the unauthorized retrieval of sensitive data and misuse of services provided by the API. It poses a serious security threat as it allows individuals to access information that they should not have access to, potentially compromising the privacy and integrity of the data. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. 
Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```java
props.set("api_key", "RGAPI-924549e3-31a9-406e-9e92-25ed41206dce") // Noncompliant
```

Compliant solution:

```java
props.set("api_key", System.getenv("API_KEY"))
```

Resources

Standards |
||||||||||||
secrets:S6705 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?An OpenWeather API key is a unique identifier that allows you to access the OpenWeatherMap API. The OpenWeatherMap API provides weather data and forecasts for various locations worldwide. If an OpenWeather API key leaks to an unintended audience, it can have several security consequences. Attackers may use the leaked API key to access the OpenWeatherMap API and consume the weather data without proper authorization. This can lead to excessive usage, potentially exceeding the API rate limits, or violating the terms of service. Moreover, depending on the pricing model of the corresponding OpenWeather account, unauthorized usage of the leaked API key can result in unexpected charges or increased costs. Attackers may consume a large amount of data or make excessive requests, leading to additional expenses for the API key owner. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Use a secret vault A secret vault should be used to generate and store the new secret. 
This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```python
url = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid=ae73acab47d0fc4b71b634d943b00518&q="  # Noncompliant
```

Compliant solution:

```python
import os

token = os.environ["OW_TOKEN"]
uri = f"http://api.openweathermap.org/data/2.5/weather?units=imperial&appid={token}&q="
```

Resources

Standards
Documentation: OpenWeather Documentation - API keys |
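As an additional hedged sketch (JavaScript this time, with illustrative names), building the request URL with `URLSearchParams` keeps the key out of the source and also takes care of query-string encoding:

```javascript
// Hypothetical sketch: assemble the OpenWeatherMap request URL with the API
// key injected at runtime. URLSearchParams handles percent-encoding for us.
function weatherUrl(apiKey, city) {
  const params = new URLSearchParams({
    units: "imperial",
    appid: apiKey,
    q: city,
  });
  return `http://api.openweathermap.org/data/2.5/weather?${params}`;
}

// Example usage:
// const url = weatherUrl(process.env.OW_TOKEN, "Paris");
```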
||||||||||||
secrets:S6706 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?A cryptographic private key is a piece of sensitive information that is used in asymmetric cryptography. They are used in conjunction with public keys to secure communications and authenticate digital signatures. Private keys can be used to achieve two main cryptographic operations, encryption or digital signature. Those operations are the basis of multiple higher-level security mechanisms such as:
Disclosing a cryptographic private key to an unintended audience can have severe security consequences. The exact impact will vary depending on the role of the key and the assets it protects. For example, if the key is used in conjunction with an X509 certificate to authenticate a web server as part of TLS communications, attackers will be able to impersonate that server. This leads to man-in-the-middle attacks that would affect both the confidentiality and integrity of the communications from clients to that server. If the key was used as part of e-mail protocols, attackers might be able to send e-mails on behalf of the key owner or decrypt previously encrypted emails. This might lead to sensitive information disclosure and reputation loss.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. In most cases, if the key is used as part of a larger trust model (X509, PGP, etc.), it is necessary to issue and publish a revocation certificate. Doing so will ensure that all people and assets that rely on this key for security operations are aware of its compromise and stop trusting it.

Analyze recent secret use: When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret, and to what extent. This operation should be part of a global incident response process.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples

Noncompliant code example:

```python
private_key = "-----BEGIN EC PRIVATE KEY-----" \
              "MF8CAQEEGEfVxjrMPigNhGP6DqH6DPeUZPbaoaCCXaAKBggqhkjOPQMBAaE0AzIA" \
              "BCIxho34upZyXDi/AUy/TBisGeh4yKJN7pit9Z+nKs4QajVy97X8W9JdySlbWeRt" \
              "2w==" \
              "-----END EC PRIVATE KEY-----"  # Noncompliant
```

Compliant solution:

```python
with open("/path/to/private.key", "r") as key_file:
    private_key = key_file.read()
```

Resources

Standards |
||||||||||||
secrets:S6708 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?The Discord webhook URL grants access to a channel in your server, represented by a bot. A plethora of permissions can be granted to this bot.
Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data: This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users
have shared on the platform. This is called personally identifiable information.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam: An attacker can use this webhook to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can cause users to be exposed to the following:
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user's credentials, bypass multi-factor authentication (MFA), and take over the user's account on the trusted website.

Malware distribution: Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```java
props.set("discord_webhook_url", "https://discord.com/api/webhooks/1143503308481384529/SBkGFYyl6njbyg_DJwhP2x5s4XAzd8Ll5CZQ7HG4xfDRJhOTAIlb0UiPL4ykOZQNIHpd") // Noncompliant
```

Compliant solution:

```java
props.set("discord_webhook_url", System.getenv("DISCORD_WEBHOOK_URL"))
```

Resources

Standards |
||||||||||||
secrets:S6755 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to a DigitalOcean personal access token or OAuth token, they might be able to compromise your DigitalOcean environment. This includes control over Droplets and any applications that are running, as well as databases and other assets that are managed by the account. What is the potential impact?If an attacker manages to gain access to the DigitalOcean environment, there exist several ways that they could seriously harm your organization. Any data that is stored in the environment could be leaked, but the environment itself could even be tampered with. Compromise of sensitive dataIf the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed. Infrastructure takeoverBy obtaining a leaked secret, an attacker can gain control over your organization’s DigitalOcean infrastructure. 
They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining. This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions. Furthermore, corporate DigitalOcean infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```ruby
require 'droplet_kit'

token = 'dop_v1_1adc4095c3c676ff1c31789a1a86480195a5b3d955010c94fcfa554b34640e1e' # Noncompliant
client = DropletKit::Client.new(access_token: token)
```

Compliant solution:

```ruby
require 'droplet_kit'

token = ENV['DIGITALOCEAN_TOKEN']
client = DropletKit::Client.new(access_token: token)
```

Resources

Documentation: DigitalOcean Documentation - How to Create a Personal Access Token

Standards |
||||||||||||
secrets:S6758 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to an NPM access token, they might be able to gain access to any private package linked to this token. What is the potential impact?The exact impact of the compromise of an NPM access token varies depending on the permissions granted to this token. It can range from loss of sensitive data and source code to severe supply chain attacks. Compromise of sensitive source codeThe affected service is used to store private packages and repositories. If a token is leaked, it can be used by unauthorized individuals to gain access to your sensitive code, proprietary libraries, and other confidential resources. This can lead to intellectual property theft, unauthorized modifications, or even sabotage of your software. If these private packages contain other secrets, it might even lead to further breaches in the organization’s services. Supply chain attacksIf the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, then there may exist grave consequences beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories. 
This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

```yaml
steps:
  - run: |
      npm install
    env:
      NPM_TOKEN: npm_tCEMceczuiTXKQaBjGIaAezYQ63PqI972ANG # Noncompliant
```

Compliant solution:

```yaml
steps:
  - run: |
      npm install
    env:
      NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Going the extra mile

Reducing the permission scope per secret: By reducing the permission scope, the token is granted only the minimum set of permissions required to perform its intended tasks. This follows the principle of least privilege, which states that a user or token should have only the necessary privileges to carry out its specific functions. By adhering to this principle, the potential attack surface is minimized, reducing the risk of unauthorized access or misuse of sensitive resources. Additionally, if a token is compromised, the reduced permission scope limits the potential damage that can be done. With fewer permissions, the attacker's ability to access or modify critical resources is restricted, reducing the impact of the compromise.

Resources

Documentation: npm Docs - Revoking access tokens

Standards |
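Outside CI, the same token can be supplied to a local `npm` invocation through environment-variable expansion in `.npmrc` (a documented npm config feature; the variable name is an example), so the token never appears in the checked-in file:

```ini
; .npmrc sketch: npm expands ${NPM_TOKEN} from the environment at run time,
; so the token never appears in the checked-in file.
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
```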
secrets:S6782 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement. When an attacker gains access to a Docker Hub personal access token and the username of the account, they can gain access to all container images accessible to that account.

**What is the potential impact?**

In case of a leaked token, an attacker can read all private images and can also store new malicious images in the registry. This can have multiple severe consequences.

*Compromise of sensitive source code*

Docker Hub is often used to store private container images. If a personal access token is leaked, it can be used by unauthorized individuals to gain access to these images. Not only does this allow a malicious person to access and use internal projects, but it can also enable them to leak sensitive source code, proprietary binaries, and other confidential resources belonging to these projects. This can lead to intellectual property theft, unauthorized modifications, or even sabotage of your software. If these private images contain other secrets, it might even lead to further breaches in the organization's services.

*Supply chain attacks*

If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, there may be grave consequences beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories. This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```yaml
steps:
  - name: Login to DockerHub
    uses: docker/login-action@v2
    with:
      username: mobythewhale
      password: dckr_pat_cq7wQZcv9xZkVlxMhDTcTV00CDo
```

Compliant solution:

```yaml
steps:
  - name: Login to DockerHub
    uses: docker/login-action@v2
    with:
      username: ${{ secrets.dockerUsername }}
      password: ${{ secrets.dockerAccessToken }}
```

**Resources**

Documentation: Docker docs - Create and manage access tokens

Standards |
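Tokens also leak from developer machines, not just CI: when no credential helper is configured, `docker login` stores the username and access token base64-encoded under the `auths` key of the client's `config.json`. The sketch below (its function name and report format are illustrative) flags such entries so the credentials can be revoked and moved to a credential helper:

```python
import base64
import json
from pathlib import Path

def plaintext_registry_logins(config_path: Path) -> list[str]:
    """List registries whose credentials sit base64-encoded (effectively
    plaintext) under the "auths" key of a Docker client config file."""
    data = json.loads(config_path.read_text())
    findings = []
    for registry, entry in data.get("auths", {}).items():
        if "auth" in entry:  # base64("username:token") -- trivially recoverable
            username = base64.b64decode(entry["auth"]).decode().split(":", 1)[0]
            findings.append(f"{registry} (user {username})")
    return findings
```

Running this against `~/.docker/config.json` on developer workstations identifies accounts whose tokens should be rotated.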
secrets:S6783 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Disclosure of blockchain data*

The leaked key can be used to query APIs of blockchain services and access sensitive information stored in the service metadata. This may include user identities and other sensitive data.

*Breach of trust in non-repudiation and disruption of the audit trail*

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity. All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

*Financial loss*

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("infura_api_key", "https://mainnet.infura.io/v3/f6fc4aa25abb16e901876269d01f2ec5") // Noncompliant
```

Compliant solution:

```
props.set("infura_api_key", System.getenv("INFURA_API_KEY"))
```

**Resources**

Standards |
secrets:S6910 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

Postmark server tokens are used to authenticate requests to the Postmark API. When a request is made to the Postmark API, the server token is included in the header of the request. This process enables Postmark to confirm that the request originates from a trusted source and should be processed accordingly. These tokens are sensitive because they provide full access to all features and data on a specific server in Postmark. Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Phishing and spam*

An attacker can use this token to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user's credentials, bypass multi-factor authentication (MFA), and take over the user's account on the trusted website.

*Malware distribution*

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate them.

*Account termination*

Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider's infrastructure. The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, causing significant inconvenience and potential loss of communication with your customers or partners.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("X-Postmark-Server-Token", "89d36b44-4c54-4623-91d9-b61f29b702f8") // Noncompliant
```

Compliant solution:

```
props.set("X-Postmark-Server-Token", System.getenv("POSTMARK_SERVER_TOKEN"))
```

**Resources**

Standards |
secrets:S6686 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

If a Clarifai API key leaks to an unintended audience, it could lead to unauthorized access to the Clarifai account and its associated data. This could result in the compromise of sensitive data or financial loss.

*Financial loss*

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, each user of the application will be able to use the third-party service without limit for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application's users.

*Compromise of sensitive data*

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers who know the authentication secret could be able to access it. Depending on the type of data that is compromised, this could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```python
from clarifai_grpc.grpc.api.status import status_code_pb2

metadata = (('authorization', 'Key d819f799b90bc8dbaffd83661782dbb7'),)  # Noncompliant
```

Compliant solution:

```python
import os

from clarifai_grpc.grpc.api.status import status_code_pb2

metadata = (('authorization', os.environ["CLARIFAI_API_KEY"]),)
```

**Resources**

Standards |
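The compliant snippet reads the key with `os.environ[...]`, which raises a bare `KeyError` when the variable is missing. A small fail-fast wrapper gives operators a clearer error and rejects empty values; the helper name and the environment variable name here are illustrative:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the environment, failing loudly if unset or empty."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(
            f"Required secret {name!r} is not set; "
            "provide it via your secret vault or CI secret store."
        )
    return value

# Example usage (variable name assumed):
# api_key = require_secret("CLARIFAI_API_KEY")
```

Failing at startup with an explicit message is preferable to shipping an empty credential and debugging an opaque authentication failure later.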
secrets:S6689 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

GitHub tokens are used for authentication and authorization when interacting with the GitHub API. They serve as a way to identify and authenticate users or applications making requests to the GitHub API. The consequences vary greatly depending on the situation and the audience the secret is exposed to. Still, two main scenarios should be considered.

*Financial loss*

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, each user of the application will be able to use the third-party service without limit for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application's users.

*Application security downgrade*

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server, or another linked component. For example, an application that disclosed a secret used to sign user authentication tokens would be at risk of user identity impersonation: an attacker with access to the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Analyze recent secret use*

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will help determine whether an attacker took advantage of the leaked secret, and to what extent. This operation should be part of a global incident response process.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("token", "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") // Noncompliant
```

Compliant solution:

```
props.set("token", System.getenv("TOKEN"))
```

**Resources**

Documentation: GitHub documentation - Managing your personal access tokens

Standards |
secrets:S6710 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

An FCM API key leak is particularly severe if the affected key has administrative privileges: the range of topics to which an attacker can subscribe and send messages is much larger than with normal privileges. Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Phishing and spam*

An attacker can use this API key to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user's credentials, bypass multi-factor authentication (MFA), and take over the user's account on the trusted website.

*Malware distribution*

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate them.

*Chaining of vulnerabilities*

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook's data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise. Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("fcm_key", "cfUDlZL9YBQ:APA91bJxU9oMf3RbiyqnmUO60KU_JLawjf2yrTfSs3_ZAp3dxZS0J88G5P5AoKWoviAdUK5i-2SB7iHcb4Wd38EMsZXBAAb6GZMaSOeKfaI0DuLxAFTOgGNKRSmj2R9gIQyzpjoThmqe") // Noncompliant
```

Compliant solution:

```
props.set("fcm_key", System.getenv("FCM_KEY"))
```

**Resources**

Standards |
secrets:S6713 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

Slack incoming webhook URLs have write-only access to a channel: they can only post messages. Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Phishing and spam*

An attacker can use this webhook to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user's credentials, bypass multi-factor authentication (MFA), and take over the user's account on the trusted website.

*Malware distribution*

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate them.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("slack_webhook_url", "https://hooks.slack.com/services/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") // Noncompliant
```

Compliant solution:

```
props.set("slack_webhook_url", System.getenv("SLACK_WEBHOOK_URL"))
```

**Resources**

Standards |
secrets:S6717 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

Slack workflow webhook URLs have different effects depending on their permissions: they can be used only to write Slack posts, or to trigger other workflows. Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Phishing and spam*

An attacker can use this webhook to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user's credentials, bypass multi-factor authentication (MFA), and take over the user's account on the trusted website.

*Malware distribution*

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate them.

*Chaining of vulnerabilities*

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook's data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise. Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("slack_webhook_url", "https://hooks.slack.com/workflows/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") // Noncompliant
```

Compliant solution:

```
props.set("slack_webhook_url", System.getenv("SLACK_WEBHOOK_URL"))
```

**Resources**

Standards |
secrets:S6718 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

Stripe endpoint secrets allow webhooks to verify that requests to a user-owned webhook really originated from Stripe. This channel can be used to transmit thousands of different types of sensitive events. Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Breach of trust in non-repudiation and disruption of the audit trail*

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity. All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

*Financial loss*

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("stripe_endpoint_secret", "whsec_3cAgzYnf0seUtVzSAP08cH9nDICqwI1T") // Noncompliant
```

Compliant solution:

```
props.set("stripe_endpoint_secret", System.getenv("STRIPE_ENDPOINT_SECRET"))
```

**Resources**

Standards |
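An endpoint secret only protects the audit trail if the webhook handler actually checks signatures. The sketch below follows Stripe's documented scheme (HMAC-SHA256 over `"<timestamp>.<payload>"`, compared against the `v1` value of the `Stripe-Signature` header); in production, prefer the official `stripe` library, which implements the same check:

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str,
                            endpoint_secret: str, tolerance: int = 300) -> bool:
    """Verify a Stripe-style webhook signature header of the form
    "t=<unix time>,v1=<hex hmac>". Returns False for stale or forged events."""
    parts = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp, candidate = parts["t"], parts["v1"]
    if abs(time.time() - int(timestamp)) > tolerance:
        return False  # reject replayed events outside the tolerance window
    signed_payload = f"{timestamp}.".encode() + payload
    expected = hmac.new(endpoint_secret.encode(), signed_payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, candidate)
```

Note the constant-time comparison (`hmac.compare_digest`) and the timestamp tolerance, which together defend against forgery and replay once the secret itself is kept out of source control.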
secrets:S6719 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Because the source code is intended to be deployed across multiple assets, including source code repositories and application hosting servers, the secrets might get exposed to an unintended audience.

**Why is this an issue?**

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don't need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people's role and entitlement.

**What is the potential impact?**

Below are some real-world scenarios that illustrate the possible impact of an attacker exploiting the secret.

*Disclosure of blockchain data*

The leaked key can be used to query APIs of blockchain services and access sensitive information stored in the service metadata. This may include user identities and other sensitive data.

*Breach of trust in non-repudiation and disruption of the audit trail*

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity. All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

*Financial loss*

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.

**How to fix it**

*Revoke the secret*

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when it is revoked.

*Use a secret vault*

A secret vault should be used to generate and store the new secret. This will ensure the secret's security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

**Code examples**

Noncompliant code example:

```
props.set("alchemy_eth_api_key", "https://eth-mainnet.alchemyapi.io/v2/sAwFYc32ctGA_VSdesa72bheDxfGWRWl") // Noncompliant
```

Compliant solution:

```
props.set("alchemy_eth_api_key", System.getenv("ALCHEMY_ETH_API_KEY"))
```

**Resources**

Standards |
secrets:S6722 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.
What is the potential impact?
PlanetScale database passwords are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.
Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.
Application’s security downgrade
A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.
For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Analyze recent secret use
When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
props.set("planetscale_password", "pscale_pw_hatgoG_EprhgnblWotaJGbeOeFE7q9BwW0_g5ML486D") // Noncompliant
Compliant solution
props.set("planetscale_password", System.getenv("PLANETSCALE_PASSWORD"))
Resources
Standards |
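When reviewing authentication logs as part of such an incident response, it helps to redact candidate credentials before log excerpts are shared or attached to tickets. A hypothetical sketch using the recognizable PlanetScale prefix from the example above; the character class is an assumption about the token alphabet, not a vendor specification:

```python
import re

# PlanetScale passwords carry a recognizable "pscale_pw_" prefix, which
# makes accidental leaks easy to redact before sharing log excerpts.
# The [A-Za-z0-9_]+ body is an illustrative assumption.
PSCALE_RE = re.compile(r"pscale_pw_[A-Za-z0-9_]+")


def redact(line: str) -> str:
    """Replace any PlanetScale-style password in a log line with a marker."""
    return PSCALE_RE.sub("pscale_pw_[REDACTED]", line)
```

Lines without a matching token pass through unchanged, so the helper can safely be applied to an entire log stream.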
||||||||||||
secrets:S6723 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.
What is the potential impact?
Mailgun API keys provide complete control over the Mailgun account and allow sending bulk emails. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.
Compromise of sensitive personal data
This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called personally identifiable information (PII). In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.
Phishing and spam
An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited or malicious content, such as phishing links and unwanted advertising.
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.
Malware distribution
Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.
Account termination
Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure. The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
props.set("mailgun_key", "key-9392bf4edd483c111748f422750442fe") // Noncompliant
Compliant solution
props.set("mailgun_key", System.getenv("MAILGUN_KEY"))
Resources
Standards |
||||||||||||
secrets:S6751 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.
What is the potential impact?
The exact consequences of a PyPI API token compromise can vary depending on the scope of the affected token. Depending on this factor, the attacker might get access to the full account the token is bound to or only to a project belonging to that user. In any case, such a compromise can lead to source code leaks, data leaks and even serious supply chain attacks. In general, a reputational loss is also a common threat.
Compromise of sensitive source code
The affected service is used to store private packages and repositories. If a token is leaked, it can be used by unauthorized individuals to gain access to your sensitive code, proprietary libraries, and other confidential resources. This can lead to intellectual property theft, unauthorized modifications, or even sabotage of your software. If these private packages contain other secrets, it might even lead to further breaches in the organization’s services.
Supply chain attacks
If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, then there may exist grave consequences beyond the compromise of source code.
The attacker may inject malware, backdoors, or other harmful code into these private repositories. This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. For PyPI, the token can be stored in a system keyring instead of a plain-text configuration file.
Code examples
Noncompliant code example
PyPI API tokens can be used to authenticate with PyPI by setting the token as a password in .pypirc:
[pypi]
username = __token__
password = pypi-YBf3ZAIKOMPwNZ1VaQ0RAtjww5lI1az1CMLEOWgDQN56EPADfzRmgsENVcmIUh2mSBwYlTtyNKGmVlLm2MZD2aJOTWmD2EO5PMyWjvUY3Ii2CjsidALCNCNmvX8N8gcijBliFN2ciBCLgQdi2YYfGjA1kz19z1UBKg
Compliant solution
Instead, Python’s keyring support can hold the token outside of the source tree, for example by configuring pip to use a keyring provider:
pip config set --global global.keyring-provider subprocess
Going the extra mile
Reducing the permission scope per secret
By reducing the permission scope, the token is granted only the minimum set of permissions required to perform its intended tasks. This follows the principle of least privilege, which states that a user or token should have only the necessary privileges to carry out its specific functions. By adhering to this principle, the potential attack surface is minimized, reducing the risk of unauthorized access or misuse of sensitive resources.
Additionally, if a token is compromised, the reduced permissions scope limits the potential damage that can be done. With fewer permissions, the attacker’s ability to access or modify critical resources is restricted, reducing the impact of the compromise.
Resources
Documentation
Standards |
||||||||||||
secrets:S6752 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. Attackers with access to an Artifactory API key will be able to use this API with all the permissions the corresponding user has been granted.
What is the potential impact?
The consequences vary depending on the compromised account entitlement but can range from proprietary information leaks to severe supply chain attacks.
Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed. In the case of Artifactory repositories, if they contain private code or software, attackers will be able to steal those. They could use this software for their own purposes, to look for further exploitable vulnerabilities, or disclose it publicly, with or without asking for a ransom.
Supply chain attacks
If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, then there may exist grave consequences beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories. This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Analyze recent secret use
When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
props.set("artifactory_token", "AKCp8vLnDPZeVA29WylUNdaT54Pg2E9rx8gJWfbPCw2Wsb0UCAEmimIPFscGbJPYEUhXVBCRQ") // Noncompliant
Compliant solution
props.set("artifactory_token", System.getenv("ARTIFACTORY_TOKEN"))
Resources
Standards |
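Detection rules like the ones in this report typically key on vendor-specific token prefixes (pscale_pw_, glsa_, shpat_, zpka_, pypi-, …) followed by a long token body. A minimal sketch of that idea; the prefix list and the length threshold are illustrative, not the analyzer's actual patterns:

```python
import re

# Illustrative vendor prefixes taken from the examples in this report;
# a real analyzer uses far more precise, per-vendor patterns.
TOKEN_PREFIXES = ("pscale_pw_", "glsa_", "shpat_", "tfp_", "zpka_", "pypi-")

# A candidate secret: a known prefix followed by a long token body.
# The 10-character minimum body length is an arbitrary demo threshold.
CANDIDATE_RE = re.compile(
    "(" + "|".join(re.escape(p) for p in TOKEN_PREFIXES) + r")[A-Za-z0-9_\-]{10,}"
)


def find_candidate_secrets(source: str) -> list[str]:
    """Return substrings of source that look like hardcoded vendor tokens."""
    return [m.group(0) for m in CANDIDATE_RE.finditer(source)]
```

Prefix matching alone produces false positives on placeholder values, which is one reason such findings are raised for review rather than auto-fixed.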
||||||||||||
secrets:S6753 |
Zuplo is an API management platform built for developers. It handles authentication and access to your API and provides additional functionality such as rate limiting the number of requests to your backend. In order for your backend to validate that a request has been processed by Zuplo, it relies on an API key generated in the Zuplo Developer Portal. If this key is compromised, attackers will be able to bypass Zuplo and access your API without authentication and authorization.
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.
What is the potential impact?
The exact impact of a Zuplo API key being leaked varies greatly depending on the type of services the software is used to implement. In general, consequences ranging from a denial of service to application compromise can be expected.
Chaining of vulnerabilities
Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise. Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.
Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
props.set("zapi_key", "zpka_213d294a9a5a44619cd6a02e55a20417_5f43e4d0") // Noncompliant
Compliant solution
props.set("zapi_key", System.getenv("ZAPI_KEY"))
Resources
Documentation
Standards |
||||||||||||
secrets:S6762 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to a Grafana personal access token or Grafana Cloud token, they might be able to compromise the Grafana environment linked to this token. By doing so, it might be possible for business-critical data to be leaked by the attacker.
What is the potential impact?
Depending on the permissions given to the secret, the impact might range from the compromise of the data of some dashboards to a full takeover of the Grafana environment.
Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.
Application takeover
With control over the Grafana application, the attacker can modify dashboards, alter data sources, or inject malicious code. This can result in the manipulation of displayed data, misleading visualizations, or even the introduction of backdoors for further exploitation.
The attacker may even attempt to escalate their privileges within the Grafana environment. By gaining administrative access or higher-level permissions, they can perform more significant actions, such as modifying access controls, adding or deleting users, or changing system configurations.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
import requests

token = 'glsa_geygSnIfuK5vBG0KgaflRCQfIb8mzaM7_b0999d91'  # Noncompliant
response = requests.get('https://grafana.example.org/api/dashboards/home', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})
Compliant solution
import os

import requests

token = os.getenv('GRAFANA_SERVICE_ACCOUNT_TOKEN')
response = requests.get('https://grafana.example.org/api/dashboards/home', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})
Resources
Documentation
Grafana Documentation - Service Accounts
Standards |
||||||||||||
secrets:S6768 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to a Typeform personal access token, they might be able to compromise the data that is accessible to the linked Typeform account. By doing so, it might be possible for customer data to be leaked by the attacker.
What is the potential impact?
If an attacker gains access to forms and the data linked to the forms, your organization may be impacted in several ways.
Data compromise
Typeform is often used to store private information that users have shared through forms. This is called personally identifiable information (PII). In many industries and locations, there are legal and compliance requirements to protect sensitive personal information. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.
Phishing attacks
An attacker can use the Typeform access token to lure users into links to a malicious domain controlled by the attacker. They can use the data stored in the forms to create attacks that look legitimate to the victims. In some cases, they might even edit existing forms to lead users to a malicious domain directly.
Once a user has been phished on a legitimate-seeming third-party website, the attacker can trick users into submitting sensitive information, such as login credentials or financial details. This can lead to identity theft, financial fraud, or unauthorized access to other systems.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
import requests

token = 'tfp_DEueEgDipkmx52r7rgU5EC7VC5K2MzzsR61ELEkqmh3Y_3mJqwKJ2vtfX5N'  # Noncompliant
response = requests.get('https://api.typeform.com/forms', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})
Compliant solution
import os

import requests

token = os.getenv('TYPEFORM_PERSONAL_ACCESS_TOKEN')
response = requests.get('https://api.typeform.com/forms', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})
Resources
Documentation
Typeform Developers - Regenerate your personal access token
Standards |
||||||||||||
secrets:S6769 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to a Shopify app token or a Shopify Partners token, they might be able to compromise the Shopify environment linked to this token. As this environment typically contains both important financial data and the personal information of clients, a breach by a malicious entity could have a serious impact on the organization.
What is the potential impact?
Shopify contains both important information about customers, as well as financial information in general. If an attacker manages to get access to either of those through a leaked secret, they could severely impact the business in multiple ways.
Compromise of sensitive personal data
This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called personally identifiable information (PII). In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws. Furthermore, the personally identifiable information contained by the Shopify platform could be used for phishing. Not sufficiently protecting the sensitive information of clients, such as addresses, email addresses and even financial information, can directly hurt these clients and will also hurt the reputation of the organization.
Disclosure of financial data
When an attacker gains access to an organization’s financial information, it can have severe consequences for the organization. One of the primary concerns is the potential leakage of sensitive financial data. This information may include bank account details, credit card information, or confidential financial reports. If this data falls into the wrong hands, it can be used for malicious purposes such as identity theft, unauthorized access to financial accounts, or even blackmail.
The disclosure of financial information can also lead to a loss of confidence and damage the organization’s reputation with its stakeholders. Customers, partners, and investors place trust in organizations to protect their financial data. In case of a breach, customers may be hesitant to continue doing business with this company, leading to a loss of revenue and market share. Similarly, partners and investors may reconsider their long-term collaborations or investments due to concerns about the organization’s overall security posture.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
import requests

token = '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'  # Noncompliant
response = requests.get('https://test-shop.myshopify.com/admin/api/2021-07/shop.json', headers={
    'X-Shopify-Access-Token': token,
    'Content-Type': 'application/json'
})
Compliant solution
import os

import requests

token = os.getenv('SHOPIFY_ACCESS_TOKEN')
response = requests.get('https://test-shop.myshopify.com/admin/api/2021-07/shop.json', headers={
    'X-Shopify-Access-Token': token,
    'Content-Type': 'application/json'
})
Resources
Documentation
Shopify.dev docs - Access tokens for custom apps in the Shopify admin
Standards |
||||||||||||
secrets:S6337 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.
Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.
What is the potential impact?
Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.
Phishing and spam
An attacker can use this API key to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited or malicious content, such as phishing links and unwanted advertising.
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.
Malware distribution
Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.
Financial loss
Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, every user of the application will be able to use the third-party service without limit for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.
How to fix it
Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples
Noncompliant code example
props.set("ibm-key", "fDKU7e_u_EnQgWgDVO4b_ubGqVTa5IYwWEey7lMfEB_1") // Noncompliant
Compliant solution
props.set("ibm-key", System.getenv("IBM_KEY"))
Resources
Standards |
||||||||||||
secrets:S6338 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?Azure Storage Account Keys are used to authenticate and authorize access to Azure Storage resources, such as blobs, queues, tables, and files. These keys are used to authenticate requests made against the storage account. If an Azure Storage Account Key is leaked to an unintended audience, it can pose a significant security risk to your Azure Storage account. An attacker with access to your storage account key can potentially access and modify all the data stored in your storage account. They can also create new resources, delete existing ones, and perform other actions that can compromise the integrity and confidentiality of your data. In addition, an attacker with access to your storage account key can also incur charges on your account by creating and using resources, which can result in unexpected billing charges. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. 
Analyze recent secret use
When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
using Azure.Storage.Blobs;
using Azure.Storage;

class Example
{
    static void Main(string[] args)
    {
        string account = "accountname";
        string accountKey = "4dVw+l0W8My+FwuZ08dWXn+gHxcmBtS7esLAQSrm6/Om3jeyUKKGMkfAh38kWZlItThQYsg31v23A0w/uVP4pg=="; // Noncompliant
        StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(account, accountKey);
        BlobServiceClient blobServiceClient = new BlobServiceClient(
            new Uri($"https://{account}.blob.core.windows.net"),
            sharedKeyCredential);
    }
}

Compliant solution
Using environment variables:
using System;
using Azure.Storage.Blobs;
using Azure.Storage;

class Example
{
    static void Main(string[] args)
    {
        string account = Environment.GetEnvironmentVariable("ACCOUNT_NAME");
        string accountKey = Environment.GetEnvironmentVariable("ACCOUNT_KEY");
        StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(account, accountKey);
        BlobServiceClient blobServiceClient = new BlobServiceClient(
            new Uri($"https://{account}.blob.core.windows.net"),
            sharedKeyCredential);
    }
}

Using a passwordless approach, thanks to DefaultAzureCredential:
using System;
using Azure.Storage.Blobs;
using Azure.Identity;

class Example
{
    static void Main(string[] args)
    {
        string account = Environment.GetEnvironmentVariable("ACCOUNT_NAME");
        var blobServiceClient = new BlobServiceClient(
            new Uri($"https://{account}.blob.core.windows.net"),
            new DefaultAzureCredential());
    }
}

Resources
Standards
Documentation
|
||||||||||||
secrets:S6684 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
Azure Subscription Keys are used to authenticate and authorize access to Azure resources and services. These keys are unique identifiers that are associated with an Azure subscription and are used to control access to resources such as virtual machines, storage accounts, and databases. Subscription keys are typically used in API requests to Azure services, and they help ensure that only authorized users and applications can access and modify resources within an Azure subscription.

If an Azure Subscription Key is leaked to an unintended audience, it can pose a significant security risk to the Azure subscription and the resources it contains. An attacker who gains access to a subscription key can use it to authenticate and access resources within the subscription, potentially causing data breaches, data loss, or other malicious activities. Depending on the level of access granted by the subscription key, an attacker could potentially create, modify, or delete resources within the subscription, or even take control of the entire subscription. This could result in significant financial losses, reputational damage, and legal liabilities for the organization that owns the subscription.
How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use
When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Microsoft Azure provides an activity log that can be used to audit the access to the API.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
props.set("subscription_key", "efbb1a98f026d061464af685cd16dcd3") // Noncompliant

Compliant solution
props.set("subscription_key", System.getenv("SUBSCRIPTION_KEY"))

Resources
Standards
Documentation
|
||||||||||||
secrets:S6687 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
If a Django secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to sign cookies and other sensitive data so that an attacker could potentially use it to perform malicious actions. For example, an attacker could use the secret key to create their own cookies that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality. In the worst-case scenario, an attacker could be able to execute arbitrary code on the application and take over its hosting server.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. In Django, changing the secret value is sufficient to invalidate any data that it protected. It is important to not add the revoked secret to the SECRET_KEY_FALLBACKS setting, as keys listed there are still accepted when verifying signed data.
Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
SECRET_KEY = 'r&lvybzry1*k+qq)=x-!=0yd5l5#1gxzk!82@ru25*ntos3_9^'

Compliant solution
import os

SECRET_KEY = os.environ["SECRET_KEY"]

Resources
Standards
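When rotating a Django secret key that was not leaked (a planned rotation rather than a revocation), Django 4.1+ lets old keys keep verifying existing signed data via SECRET_KEY_FALLBACKS while the new key signs everything going forward. A sketch of a settings fragment under that assumption (the environment variable names are illustrative, not standard):

```python
# settings.py fragment (sketch): gradual key rotation for a planned, non-leak rotation.
import os

# Demo only: real deployments set these outside the codebase.
os.environ["DJANGO_SECRET_KEY"] = "new-key-for-demo"
os.environ["DJANGO_OLD_SECRET_KEYS"] = "old-key-1,old-key-2"

# The current key signs all new cookies and tokens.
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]

# Old, NON-leaked keys only: Django still accepts data they signed during the
# transition. A leaked key must never appear here, or its signatures stay trusted.
SECRET_KEY_FALLBACKS = [
    k for k in os.environ.get("DJANGO_OLD_SECRET_KEYS", "").split(",") if k
]
```

Once sessions and signed tokens issued under the old keys have expired, the fallback list can be emptied.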
Documentation
|
||||||||||||
secrets:S6688 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
A Facebook application secret key is a unique authentication token assigned to a Facebook application. It is used to authenticate and authorize the application to access Facebook’s APIs and services.
Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data
This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called sensitive personal data. In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam
An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can cause users to be exposed to a range of threats. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution
Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
props.set("facebook_secret", "a569a8eee3802560e1416edbc4ee119d") // Noncompliant

Compliant solution
props.set("facebook_secret", System.getenv("FACEBOOK_SECRET"))

Resources
Standards
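Beyond keeping the app secret out of source code, the secret should only ever be used server-side. The Graph API supports requiring an appsecret_proof parameter (an HMAC-SHA256 of the access token, keyed with the app secret) so that a stolen access token alone is not enough to call the API. A sketch with placeholder values (the token and secret shown are illustrative only):

```python
import hashlib
import hmac

def appsecret_proof(access_token: str, app_secret: str) -> str:
    """Compute the Graph API appsecret_proof: HMAC-SHA256 of the token, keyed with the app secret."""
    return hmac.new(
        app_secret.encode("utf-8"),
        msg=access_token.encode("utf-8"),
        digestmod=hashlib.sha256,
    ).hexdigest()

# Placeholder values for illustration; real values come from the environment or a vault.
proof = appsecret_proof("EAAB-example-token", "app-secret-from-vault")
print(proof)  # 64 lowercase hex characters
```

The proof is sent alongside the access token on each request, and Facebook rejects calls whose proof does not match, which limits the blast radius of a leaked token.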
Documentation
|
||||||||||||
secrets:S6697 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
Passwords in MySQL are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data. If a MySQL password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it and the applications that rely on it.

Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade
Applications relying on a MySQL database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.
For example, if the MySQL instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use
When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. General-purpose MySQL log files contain information about user authentication. They can be used to audit malicious use of password-leak-affected accounts.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Never hard-code secrets, not even the default values
It is important that you do not hard-code secrets, even default values. First, hard-coded default secrets are often short and can be easily compromised even by attackers who do not have access to the code base. Second, hard-coded default secrets can cause problems if they need to be changed or replaced. And most importantly, there is always the possibility to accidentally set default secrets for production services, which can lead to security vulnerabilities and make production insecure by default. To minimize these risks, it is recommended to apply the above strategies, even for the default settings.
Code examples

Noncompliant code example
uri = "mysql://foouser:foopass@example.com/testdb"

Compliant solution
import os

user = os.environ["MYSQL_USER"]
password = os.environ["MYSQL_PASSWORD"]
uri = f"mysql://{user}:{password}@example.com/testdb"

Resources
Standards |
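One pitfall with interpolating credentials into a connection URI, as in the compliant example: a password containing reserved characters (@, :, /) silently corrupts the URI. Percent-encoding the credentials avoids this; a sketch, with demo values standing in for real environment configuration:

```python
import os
from urllib.parse import quote_plus

# Demo only: real deployments set these outside the codebase.
os.environ["MYSQL_USER"] = "foouser"
os.environ["MYSQL_PASSWORD"] = "p@ss:word/1"

# Reserved characters in the credentials are escaped instead of corrupting the URI.
user = quote_plus(os.environ["MYSQL_USER"])
password = quote_plus(os.environ["MYSQL_PASSWORD"])
uri = f"mysql://{user}:{password}@example.com/testdb"
print(uri)  # mysql://foouser:p%40ss%3Aword%2F1@example.com/testdb
```

Most MySQL drivers accept either the percent-encoded URI or, better still, separate user/password parameters that need no encoding at all.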
||||||||||||
secrets:S6720 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
Zapier webhook URLs have different effects depending on their permissions: They can be used only to write simple messages in instant messaging apps or trigger other advanced workflows. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam
An attacker can use this webhook URL to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can cause users to be exposed to a range of threats.
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution
Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Chaining of vulnerabilities
Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise. Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
props.set("zapier_webhook_url", "https://hooks.zapier.com/hooks/catch/3017724/t0q8ed/") // Noncompliant

Compliant solution
props.set("zapier_webhook_url", System.getenv("ZAPIER_WEBHOOK_URL"))

Resources
Standards |
||||||||||||
secrets:S6721 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
Teams Workflow webhook URLs have different effects depending on their permissions: They can be used only to write Teams posts or to trigger other workflows. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam
An attacker can use this webhook URL to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can cause users to be exposed to a range of threats.
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution
Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Chaining of vulnerabilities
Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise. Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
props.set("teams_webhook_url", "https://sonarcompany.webhook.office.com/webhookb2/52feb105-fe74-52b9-8e90-5d165916fe22@61c6aa5a3-6531-4e28-9c0b-33ba1a8aa1ff/IncomingWebhook/f7fb2308e5f14431ace5b7cd0e670e42/4563618c-b03b-4e80-b093-28bb4ff11de8") // Noncompliant

Compliant solution
props.set("teams_webhook_url", System.getenv("TEAMS_WEBHOOK_URL"))

Resources
Standards |
||||||||||||
secrets:S6733 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Chaining of vulnerabilities
Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise. Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.

Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Account termination
Unauthorized access to mailing service API keys can also result in resource abuse.
Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure. The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
props.set("airtable_key", "key6yLyCekATg67Ts") // Noncompliant

Compliant solution
props.set("airtable_key", System.getenv("AIRTABLE_KEY"))

Resources
Standards |
||||||||||||
secrets:S6736 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?
AMQP URLs containing credentials allow publishing and consuming messages from the queue. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the credentials.

Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Malware distribution
Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example
props.set("amqp-url", "amqps://admin:password@example.com:8080/example") // Noncompliant

Compliant solution
props.set("amqp-url", "amqps://" + System.getenv("AMQP_CREDENTIALS") + "@example.com:8080/example")

Resources
Standards |
||||||||||||
secrets:S6760 |
Yandex Cloud is a complete platform that provides services such as virtual machines, cloud storage, API gateways, and private networks, to name a few. In Yandex Cloud, users are authenticated using secret keys and tokens. If one of these secrets is compromised, attackers will be able to perform any action on behalf of the account or user associated with it.

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to a Yandex token or key, they might be able to compromise your Yandex Cloud environment. This includes control over any applications or services that are running, as well as data that are managed by the account.

What is the potential impact?
If an attacker manages to gain access to the Yandex Cloud environment, there exist several ways that they could seriously harm your organization. Any data that is stored in the environment could be leaked, and the environment itself could even be tampered with.

Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it.
Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Infrastructure takeover
By obtaining a leaked secret, an attacker can gain control over your organization’s Yandex Cloud infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining. This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions. Furthermore, corporate Yandex Cloud infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data and to cause more damage to the overall infrastructure.

How to fix it

Revoke the secret
Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault
A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples

Noncompliant code example
import { Session, cloudApi, serviceClients } from '@yandex-cloud/nodejs-sdk';

const { resourcemanager: { cloud_service: { ListCloudsRequest } } } = cloudApi;

const session = new Session({ iamToken: 't1.7euelSbPyceKx87JqpuRl1qZiY-Ryi3rnpWaksrKaZqUppnLncmDnpeajZvl8_dZNAFl-e8ENXMH_t3z9xljfmT57wQ1cwf-.-LErty1vRh4S__VEp-aDnM5huB5MEfm_Iu1u2IzNgyrn0emiWDYA6rSQXDvzjE0O3HBbUlqoDeCmXYYInzZ6Cg' }); // Noncompliant
const cloudService = session.client(serviceClients.CloudServiceClient);
const response = await cloudService.list(ListCloudsRequest.fromPartial({ pageSize: 100 }));

Compliant solution
import { Session, cloudApi, serviceClients } from '@yandex-cloud/nodejs-sdk';

const { resourcemanager: { cloud_service: { ListCloudsRequest } } } = cloudApi;

const session = new Session({ iamToken: process.env.YANDEX_TOKEN });
const cloudService = session.client(serviceClients.CloudServiceClient);
const response = await cloudService.list(ListCloudsRequest.fromPartial({ pageSize: 100 }));

Resources
Documentation
Standards |
||||||||||||
secrets:S6764 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?
In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If attackers gain access to your WakaTime OAuth token or secret, they can potentially use it to make unauthorized requests to the WakaTime API on your behalf.

What is the potential impact?
Attackers exploiting leaked WakaTime OAuth tokens or secrets can potentially access sensitive information, modify data, or perform actions on behalf of the user without their consent. The exact capabilities of the attackers will depend on the authorizations the corresponding application has been granted. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive data
If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.
Compromise of sensitive personal dataThis kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users
have shared on the platform. This is called In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Use a secret vault A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. Code examplesNoncompliant code examplefrom rauth import OAuth2Service service = OAuth2Service( client_id='d130uKF73fueZSCM9tUodIFN', client_secret='waka_sec_ez0kI3tQlYVvYSJOAjoI5n3PpyG69HQl91TZKFjSdb0X0XXgY7dahXiPpAhYL2kNxqDBzHuHNuzCPr5d', # Noncompliant name='wakatime', authorize_url='https://wakatime.com/oauth/authorize', access_token_url='https://wakatime.com/oauth/token', base_url='https://wakatime.com/api/v1/') Compliant solutionimport os from rauth import OAuth2Service service = OAuth2Service( client_id=os.environ['WAKA_CLIENT_ID'], client_secret=os.environ['WAKA_CLIENT_SECRET'], name='wakatime', authorize_url='https://wakatime.com/oauth/authorize', access_token_url='https://wakatime.com/oauth/token', base_url='https://wakatime.com/api/v1/') ResourcesDocumentationWakaTime API Documentation - WakaTime API Authenticationb Standards |
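To complement revocation, the codebase can be scanned for any remaining hard-coded WakaTime secrets before the fix is merged. The sketch below is an illustrative helper, not an official detector; it assumes secrets follow the "waka_sec_" prefix seen in the noncompliant example above.

```python
import re

# Minimal sketch of a source scanner for WakaTime-style secrets.
# Assumption: secrets use the "waka_sec_" prefix followed by a long
# alphanumeric string, as in the noncompliant example above.
WAKA_SECRET = re.compile(r"waka_sec_[0-9A-Za-z]{20,}")

def find_waka_secrets(source_text: str) -> list:
    """Return all substrings of source_text that look like WakaTime secrets."""
    return WAKA_SECRET.findall(source_text)
```

Running such a check in CI provides a cheap safety net after the secret has been rotated out of the code.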
||||||||||||
secrets:S6765 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. If an attacker gains access to a Figma personal access token, they might be able to compromise the data that is accessible to the linked Figma account. By doing so, it might be possible for confidential data to be leaked by the attacker.

What is the potential impact?

Below are some real-world scenarios that may occur when a malicious entity manages to retrieve a leaked Figma personal access token.

Compromise of business-critical data: An attacker can use a personal access token to gain unauthorized access to your company’s Figma projects and designs. This can include confidential client data, proprietary design assets, or any other intellectual property stored in Figma. With unauthorized access, the attacker can download and share this sensitive data, potentially leading to data breaches, intellectual property theft, or other forms of unauthorized disclosure.

Unauthorized actions in the Figma environment: With a leaked Figma personal access token, an attacker can perform various actions on behalf of your company within the Figma workspace. This can include creating, modifying, or deleting projects, files, or team members. By impersonating authorized users, the attacker can manipulate your company’s design assets or disrupt the design workflow. This can result in unauthorized changes and data loss.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

import requests

token = 'figd_OLDXZWOP4fxW4c9ER0xzxRda96M-f0eFwZpFQjHJ'  # Noncompliant
response = requests.get('https://api.figma.com/v1/me', headers={'X-FIGMA-TOKEN': token, 'Content-Type': 'application/json'})

Compliant solution:

import os
import requests

token = os.getenv('FIGMA_PERSONAL_ACCESS_TOKEN')
response = requests.get('https://api.figma.com/v1/me', headers={'X-FIGMA-TOKEN': token, 'Content-Type': 'application/json'})

Resources: Documentation (Figma Developers - Access tokens), Standards |
||||||||||||
secrets:S6777 |
Shippo is a multi-carrier shipping platform that helps businesses streamline their shipping processes. It provides a unified API and dashboard that allows businesses to connect with multiple shipping carriers. Shippo API tokens are used for authentication and authorization purposes when making API requests. Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Shippo API token is leaked, it can have several consequences:

Financial loss: If the leaked API token is used to generate shipping labels or make shipping-related transactions, it can result in financial loss. Unauthorized individuals may exploit the token to generate fraudulent labels or make unauthorized shipments, leading to additional shipping costs or potential chargebacks.

Data breach: If the leaked API token is associated with a user account that has access to sensitive customer or business data, it can result in a data breach. This can lead to the exposure of personal information, shipping addresses, payment details, or other confidential data, potentially causing harm to your customers and your business reputation.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use: When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

Shippo.setApiKey('shippo_live_258d9b4c41a8cb88ca7fb4b12c65083f658435ac'); // Noncompliant

HashMap<String, Object> addressMap = new HashMap<String, Object>();
addressMap.put("name", "Mr. Hippo");
addressMap.put("company", "Shippo");
addressMap.put("street1", "215 Clayton St.");
addressMap.put("city", "San Francisco");
addressMap.put("state", "CA");
addressMap.put("zip", "94117");
addressMap.put("country", "US");
addressMap.put("phone", "+1 555 341 9393");
addressMap.put("email", "support@goshipppo.com");
Address createAddress = Address.create(addressMap);

Compliant solution:

Shippo.setApiKey(System.getenv("SHIPPO_API_TOKEN"));

HashMap<String, Object> addressMap = new HashMap<String, Object>();
addressMap.put("name", "Mr. Hippo");
addressMap.put("company", "Shippo");
addressMap.put("street1", "215 Clayton St.");
addressMap.put("city", "San Francisco");
addressMap.put("state", "CA");
addressMap.put("zip", "94117");
addressMap.put("country", "US");
addressMap.put("phone", "+1 555 341 9393");
addressMap.put("email", "support@goshipppo.com");
Address createAddress = Address.create(addressMap);

Resources: Standards |
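The "Analyze recent secret use" step can be sketched in a few lines. The log format below is hypothetical (an ISO-8601 timestamp followed by a client IP and endpoint); adapt the parsing to whatever your API gateway or provider actually records.

```python
from datetime import datetime, timezone

# Hypothetical log line format: "<ISO-8601 timestamp> <client IP> <endpoint>".
def uses_after_disclosure(log_lines, disclosed_at):
    """Return the log entries recorded at or after the disclosure date."""
    suspicious = []
    for line in log_lines:
        # fromisoformat() on older Pythons does not accept a trailing "Z".
        stamp = line.split()[0].replace("Z", "+00:00")
        if datetime.fromisoformat(stamp) >= disclosed_at:
            suspicious.append(line)
    return suspicious
```

Any entry returned here warrants a closer look as part of the incident response process described above.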
||||||||||||
secrets:S6334 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Google API keys are used to authenticate applications that consume Google Cloud APIs. API keys are not strictly secret, as they are often embedded into client-side code or mobile applications that consume Google Cloud APIs. Still, they should be secured.

Financial loss: An unrestricted Google API key disclosed in public source code could be used by malicious actors to consume Google APIs on behalf of your application.

Denial of service: If your account has enabled a quota to cap the API consumption of your application, this quota can be exceeded, leaving your application unable to request the Google APIs it requires to function properly.

How to fix it

Depending on the sensitivity of the key and its use, only administrators should have access to the Google API keys used by your application.

For client-facing keys: If the key must be sent to clients for the service to run properly, it does not need to be revoked or stored in a vault, and the following sections can be skipped. In that case, restrict the key instead (for example by API, HTTP referrer, or IP address) to mitigate abuse.

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

props.set("google-api-key", "zAIJf4Six4MjGwxvkarrf1LPUaCdyNSjzsyIoRI") // Noncompliant

Compliant solution:

props.set("google-api-key", System.getenv("GOOGLE_API_KEY"))

Resources: Standards |
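For server-side keys, the environment-variable approach shown in the compliant solution can be hardened with a fail-fast check, so a missing key is caught at startup rather than on the first API call. This is a minimal sketch; `GOOGLE_API_KEY` is simply the variable name assumed in the example above.

```python
import os

def get_google_api_key() -> str:
    """Read the API key from the environment, failing fast if it is absent."""
    key = os.environ.get("GOOGLE_API_KEY")
    if not key:
        raise RuntimeError("GOOGLE_API_KEY is not set; refusing to start")
    return key
```

Failing at startup makes a misconfigured deployment obvious immediately instead of surfacing as sporadic API errors.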
||||||||||||
secrets:S6335 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam: An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content, such as phishing links. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution: Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Financial loss: Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, every user of the application can use the third-party service without limit for their own needs, including in ways that were not intended. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

Here is an example of a service account key file. In general, it takes the form of a JSON file, as demonstrated in the GCP docs.

{
  "type": "service_account",
  "project_id": "example-project",
  "private_key_id": "2772b8e6f42dc67369b98f0b91694f7805b28844",
  "private_key": "-----BEGIN PRIVATE KEY-----\nKBww9jggAgBEHBCBAASIMDsoCBAuAQINAgFAGSXQTkiAE0cEIkoQghJAqGavB/r3\n2W6raHa1Qrfj6pii5U2Ok53SxCyK3TxYc3Bfxq8orZeYC9LQ/I3tz7w4/BnT71AD\nfP1i8SWHsRMIicSuVFcRoYMA+A1eNSmdrujdBNWgedfuSyHbPnNY7s8BBUIoBN7I\n8gJG5DUUKAZfZDB2c/n7Yu0=\n-----END PRIVATE KEY-----\n",
  "client_email": "example@example.iam.gserviceaccount.example.com",
  "client_id": "492539091821492546176",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/example%40example.iam.gserviceaccount.example.com",
  "universe_domain": "googleapis.com"
}

Compliant solution:

Always avoid committing service account key files to version control or other public systems. Provide the key file to the application at runtime instead, for example through a secret vault or a deployment-time secret mechanism.

Resources: Standards |
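One way to keep the key file out of the repository is to load it from a path supplied at deployment time. The sketch below assumes the conventional `GOOGLE_APPLICATION_CREDENTIALS` environment variable, which Google client libraries also honor; the helper itself is illustrative, not part of any SDK.

```python
import json
import os

def load_service_account_key() -> dict:
    """Load the service account key file from a path supplied at runtime.

    Assumption: GOOGLE_APPLICATION_CREDENTIALS points at the JSON key file,
    which is mounted or provisioned outside of version control.
    """
    key_path = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
    with open(key_path, encoding="utf-8") as key_file:
        return json.load(key_file)
```

The application never embeds the key material itself; rotating the key then only requires replacing the provisioned file.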
||||||||||||
secrets:S6336 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. This rule flags instances of hard-coded Alibaba Cloud AccessKeys.

What is the potential impact?

AccessKeys are long-term credentials designed to authenticate and authorize requests to Alibaba Cloud. If your application interacts with Alibaba Cloud, then it requires AccessKeys to access all the resources it needs to function properly. The resources that can be accessed depend on the permissions granted to the Alibaba Cloud account. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Financial loss: Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, every user of the application can use the third-party service without limit for their own needs, including in ways that were not intended. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Compromise of sensitive data: If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Malware distribution: Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Only administrators should have access to the AccessKeys used by your application.

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

props.set("alibaba-key", "LTAI5tBcc9SecYAo") // Noncompliant

Compliant solution:

props.set("alibaba-key", System.getenv("ALIBABA_KEY"))

Resources: Standards |
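A lightweight pattern that works with container secret mounts as well as plain environment variables is sketched below. The `/run/secrets` path is the Docker/Kubernetes convention and is an assumption here, not an Alibaba Cloud requirement; the helper is illustrative, not a vault client.

```python
import os

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Prefer a file-mounted secret; fall back to an environment variable.

    Assumption: file-mounted secrets live under secrets_dir (the
    Docker/Kubernetes convention), and the env var is the upper-cased name.
    """
    try:
        with open(os.path.join(secrets_dir, name), encoding="utf-8") as f:
            return f.read().strip()
    except OSError:
        return os.environ[name.upper()]
```

With this shape, `read_secret("alibaba_key")` first checks `/run/secrets/alibaba_key` and then the `ALIBABA_KEY` environment variable, so local development and orchestrated deployments share one code path.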
||||||||||||
secrets:S6696 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

SendGrid keys are used for authentication and authorization when using the SendGrid email delivery service. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data: This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam: An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content, such as phishing links. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution: Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Account termination: Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure. The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

props.set("sg_key", "SG.Wjo5QoWqTmrFtMUf8m2T.CIY0Z24e5sJawIymiK_ZKC_7I15yDP0ur1yt0qtkR9Go") // Noncompliant

Compliant solution:

props.set("sg_key", System.getenv("SG_KEY"))

Resources: Documentation, Standards |
||||||||||||
secrets:S6698 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in PostgreSQL are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data. If a PostgreSQL password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it, and the applications that rely on it.

Compromise of sensitive data: If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade: Applications relying on a PostgreSQL database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise. For example, if the PostgreSQL instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use: When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. By default, no connection information is logged by the PostgreSQL server; the log_connections parameter must be enabled for connection attempts to be recorded.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Never hard-code secrets, not even the default values: It is important that you do not hard-code secrets, even default values. First, hard-coded default secrets are often short and can be easily compromised even by attackers who do not have access to the code base. Second, hard-coded default secrets can cause problems if they need to be changed or replaced. And most importantly, there is always the possibility to accidentally set default secrets for production services, which can lead to security vulnerabilities and make production insecure by default. To minimize these risks, it is recommended to apply the above strategies, even for the default settings.

Code examples

Noncompliant code example:

uri = "postgres://foouser:foopass@example.com/testdb"  # Noncompliant

Compliant solution:

import os

user = os.environ["PG_USER"]
password = os.environ["PG_PASSWORD"]
uri = f"postgres://{user}:{password}@example.com/testdb"

Resources: Documentation, Standards |
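When the connection URI is assembled from environment variables, as in the compliant solution above, percent-encoding the credentials avoids producing a malformed URI when a rotated password happens to contain characters like `@` or `:`. This helper is a sketch; the host and database names are the placeholders from the example above.

```python
import os
from urllib.parse import quote

def build_pg_uri(host: str = "example.com", db: str = "testdb") -> str:
    """Build a PostgreSQL URI from env vars, percent-encoding credentials."""
    user = quote(os.environ["PG_USER"], safe="")
    password = quote(os.environ["PG_PASSWORD"], safe="")
    return f"postgres://{user}:{password}@{host}/{db}"
```

Without the encoding, a password such as `p@ss:word` would be parsed as part of the host portion of the URI.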
||||||||||||
secrets:S6699 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The Spotify API secret is a confidential key used for authentication and authorization purposes when accessing the Spotify API. The Spotify API grants applications access to Spotify’s services and, by extension, user data. Should this secret fall into the wrong hands, two immediate concerns arise: unauthorized access to user data and data manipulation. When unauthorized entities obtain the API secret, they have potential access to users' personal Spotify information. This includes the details of their playlists, saved tracks, and listening history. Such exposure might not only breach personal boundaries but also infringe upon privacy standards set by platforms and regulators. In addition to simply gaining access, there is the risk of data manipulation. If malicious individuals obtain the secret, they could tamper with user content on Spotify. This includes modifying playlists, deleting beloved tracks, or even adding unsolicited ones. Such actions not only disrupt the user experience but also violate the trust that users have in both Spotify and third-party applications connected to it.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

props.set("spotify_secret", "f3fbd32510154334aaf0394aca3ac4c3") // Noncompliant

Compliant solution:

props.set("spotify_secret", System.getenv("SPOTIFY_SECRET"))

Resources: Standards |
||||||||||||
secrets:S6731 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Slack bot tokens can be granted several types of access to a workspace: among other capabilities, they can post messages and read usernames and user email addresses. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam: An attacker can use this token to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can expose users to unsolicited and potentially malicious content, such as phishing links. Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution: Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Compromise of sensitive personal data: This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it

Revoke the secret: Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault: A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

props.set("slack_bot_token", "xoxb-592666205443-2542034435697-FM7vdsq184d0G5vBNiOq8MSF8t7") // Noncompliant

Compliant solution:

props.set("slack_bot_token", System.getenv("SLACK_BOT_TOKEN"))

Resources: Standards |
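After moving the token to the environment, a simple shape check can catch misconfiguration early. The `xoxb-` prefix matches the bot token in the example above; treat the check as a sanity filter under that assumption, not a validity guarantee.

```python
import os

def get_slack_bot_token() -> str:
    """Read the bot token from the environment and sanity-check its shape.

    Assumption: bot tokens carry the "xoxb-" prefix, as in the
    noncompliant example above.
    """
    token = os.environ.get("SLACK_BOT_TOKEN", "")
    if not token.startswith("xoxb-"):
        raise RuntimeError("SLACK_BOT_TOKEN is missing or malformed")
    return token
```

An empty or wrongly-pasted value then fails loudly at startup instead of producing confusing API authentication errors later.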
secrets:S6732
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity. All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  props.set("stripe_key", "sk_live_kiSSAXe2IyGNvprHode7efRT") // Noncompliant

Compliant solution:

  props.set("stripe_key", System.getenv("STRIPE_KEY"))

Resources

Standards
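Stripe distinguishes live secret keys (sk_live_ prefix) from test keys (sk_test_). As an additional safeguard alongside the environment-variable approach above, a deployment can refuse to run with a live key outside production. The sketch below assumes the key is provided in a STRIPE_KEY environment variable (an illustrative name):

```python
import os

def load_stripe_key(environment: str) -> str:
    """Fetch the Stripe secret key from the environment and sanity-check its prefix."""
    key = os.environ.get("STRIPE_KEY", "")
    if environment != "production" and key.startswith("sk_live_"):
        raise RuntimeError("Refusing to use a live Stripe key outside production")
    return key
```

This guard limits the blast radius of a misconfigured staging or CI environment, where a pasted live key would otherwise process real transactions.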
secrets:S6739
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the credentials.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  props.set("redis-url", "rediss://admin:password@example.com:8080/example") // Noncompliant

Compliant solution:

  props.set("redis-url", System.getenv("REDIS_URL"))

Resources

Standards
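When the connection URL is assembled at runtime rather than hardcoded, credentials containing special characters can silently corrupt it. A minimal sketch, assuming the credentials live in REDIS_USER and REDIS_PASSWORD environment variables (illustrative names), percent-encodes them before building the URL:

```python
import os
from urllib.parse import quote

def build_redis_url(host: str, port: int, db: str) -> str:
    """Assemble a rediss:// URL from environment-held credentials,
    percent-encoding them so reserved characters cannot break the URL."""
    user = quote(os.environ["REDIS_USER"], safe="")
    password = quote(os.environ["REDIS_PASSWORD"], safe="")
    return f"rediss://{user}:{password}@{host}:{port}/{db}"
```

Percent-encoding matters because characters like `@`, `:` and `/` in a password would otherwise be parsed as URL structure.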
secrets:S6773
HashiCorp Vault is a popular open-source tool used for securely storing and accessing sensitive data such as passwords, API keys, certificates, and encryption keys. It provides a centralized solution for managing secrets and helps organizations enforce security best practices. Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a HashiCorp Vault token is compromised, it can have serious consequences for the security of the system and the sensitive data stored within the Vault. Here are some potential consequences:

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, each user of the application would be able to use the third-party service without limit for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity. All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component. For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation.
An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  import hvac

  client = hvac.Client(url='https://vault.example.com', token='hvb.AAAAAQJyBEVF-vTWUrg0hcoIPuvKjjNxXXZ5MfsYVg2gJ0fGZpVi0IGTFfh4TqsoQIWaocNRXD1qzGXvhIHWJBM_rWU9YJY8sXOYVy_s1JAHasXJwGmZ_fBLJfSG6aCwQkCGwtAhYw')  # Noncompliant
  secret = client.secrets.kv.v2.read_secret_version(path='secret/myapp')
  data = secret['data']
  username = data.get('username')
  password = data.get('password')

Compliant solution:

  import os
  import hvac

  client = hvac.Client(url='https://vault.example.com', token=os.environ.get('VAULT_TOKEN'))
  secret = client.secrets.kv.v2.read_secret_version(path='secret/myapp')
  data = secret['data']
  username = data.get('username')
  password = data.get('password')

Resources

Documentation

Hashicorp API Documentation - Tokens
Hashicorp API Tutorial - Tokens
Hashicorp API Tutorial - Batch tokens

Standards
secrets:S6290
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. This rule detects leaks of AWS credentials.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker. Spam can expose users to phishing attempts and other malicious content.
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, each user of the application would be able to use the third-party service without limit for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  props.set("aws-secret-access-key", "kHeUAwnSUizTWpSbyGAz4f+As5LshPIjvtpswqGb") // Noncompliant

Compliant solution:

  props.set("aws-secret-access-key", System.getenv("AWS_SECRET_ACCESS_KEY"))

Resources

Standards
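Long-lived AWS access key IDs carry the recognizable AKIA prefix, which is what makes leaked AWS credentials comparatively easy to scan for. A simplified detection sketch (the analyzer's actual pattern may be stricter; the example key ID below is AWS's documented placeholder):

```python
import re

# Long-lived AWS access key IDs start with the well-known "AKIA" prefix
# followed by 16 uppercase alphanumeric characters.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_aws_key_ids(text: str) -> list[str]:
    """Return substrings that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)
```

A scan like this can run as a pre-commit hook to stop key IDs before they ever reach a repository; note that secret access keys have no distinctive prefix and are much harder to detect this way.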
secrets:S6292
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If your application interacts with Amazon MWS then it requires credentials to access all the resources it needs to function properly. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker. Spam can expose users to phishing attempts and other malicious content.
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Account termination

Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure. The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Only administrators should have access to the MWS credentials used by your application.

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  props.set("mws-key", "amzn.mws.3b8be74a-5f63-5770-5bad-19bd40c0ac65") // Noncompliant

Compliant solution:

  props.set("mws-key", System.getenv("MWS_KEY"))

Resources

Standards
secrets:S6690
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

GitLab tokens are used for authentication and authorization purposes. They are essentially access credentials that allow users or applications to interact with the GitLab API. With a GitLab token, you can perform various operations such as creating, reading, updating, and deleting resources like repositories, issues, merge requests, and more. Tokens can also be scoped to limit the permissions and actions that can be performed.

A leaked GitLab token can have significant consequences for the security and integrity of the associated account and resources. It exposes the account to unauthorized access, potentially leading to data breaches and malicious actions. The unintended audience can exploit the leaked token to gain unauthorized entry into the GitLab account, allowing them to view, modify, or delete repositories, issues, and other resources. This unauthorized access can result in the exposure of sensitive data, such as proprietary code, customer information, or confidential documents, leading to potential data breaches. Moreover, the unintended audience can perform malicious actions within the account, introducing vulnerabilities, injecting malicious code, or tampering with settings. This can compromise the security of the account and the integrity of the software development process.

Additionally, a leaked token can enable the unintended audience to take control of the GitLab account, potentially changing passwords, modifying settings, and adding or removing collaborators. This account takeover can disrupt development and collaboration workflows, causing reputational damage and operational disruptions. Furthermore, the impact of a leaked token extends beyond the immediate account compromise. It can have regulatory and compliance implications, requiring organizations to report the breach, notify affected parties, and potentially face legal and financial consequences. In general, the compromise of a GitLab token can lead to supply chain attacks that affect more than one’s own organization.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  props.set("token", "glpat-zcs1FfaxGnHfvzd7ExHz") // Noncompliant

Compliant solution:

  props.set("token", System.getenv("TOKEN"))

Resources

Standards
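GitLab personal access tokens carry a recognizable glpat- prefix, which makes leaked tokens comparatively easy to scan for before they reach a repository. A simplified pre-commit scan sketch (the analyzer's actual pattern may be stricter than this one):

```python
import re

# GitLab personal access tokens are prefixed with "glpat-".
GITLAB_PAT = re.compile(r"\bglpat-[0-9A-Za-z_\-]{20}\b")

def find_gitlab_tokens(text: str) -> list[str]:
    """Return substrings that look like GitLab personal access tokens."""
    return GITLAB_PAT.findall(text)
```

Running such a check in a pre-commit hook complements, but does not replace, revoking any token that has already been committed.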
secrets:S6691
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A Google client OAuth secret is a confidential string that is used to authenticate and authorize applications when they interact with Google APIs. It is a part of the OAuth 2.0 protocol, which allows applications to access user data on their behalf. The client secret is used in the OAuth flow to verify the identity of the application and ensure that only authorized applications can access user data. It is typically used in combination with a client ID, which identifies the application itself.

If a Google client OAuth secret leaks to an unintended audience, it can have serious security implications. Attackers who obtain the client secret can use it to impersonate the application and gain unauthorized access to user data. They can potentially access sensitive information, modify data, or perform actions on behalf of the user without their consent. The exact capabilities of the attackers will depend on the authorizations the corresponding application has been granted.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Google Cloud console provides a Logs Explorer feature that can be used to audit recent access to a cloud infrastructure.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  props.set("client_secret", "TgxYWFmND-1NTYwNTgzMDM3N") // Noncompliant

Compliant solution:

  props.set("client_secret", System.getenv("CLIENT_SECRET"))

Resources

Standards
Documentation
secrets:S6692
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A reCaptcha secret key is a unique token that is used to verify the authenticity of reCaptcha requests made from an application to the reCaptcha service. It is a key component in ensuring CAPTCHA challenges issued by the application are properly solved and verified. If a reCaptcha secret key leaks to an unintended audience, attackers with access to it will be able to forge CAPTCHA responses without solving them. It will allow them to bypass the CAPTCHA challenge verification. This can lead to an influx of spam submissions, automated attacks, or unauthorized access attempts depending on the feature the CAPTCHA mechanism is intended to protect.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.
Code examples

Noncompliant code example:

  props.set("recaptcha_secret", "6LcaQa4mAAAAAFvhmzAd2hErGBSt4FC-BPzm4cNS") // Noncompliant

Compliant solution:

  props.set("recaptcha_secret", System.getenv("RECAPTCHA_SECRET"))

Resources

Standards
Documentation
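Server-side verification posts the secret together with the client's response token to the reCAPTCHA siteverify endpoint. A minimal sketch of building that request body with the secret taken from a RECAPTCHA_SECRET environment variable (an assumed name) rather than from source code:

```python
import os
from urllib.parse import urlencode

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def build_siteverify_payload(client_response: str, remote_ip: str = "") -> str:
    """Build the POST body for server-side reCAPTCHA verification,
    reading the secret from the environment instead of the source."""
    params = {"secret": os.environ["RECAPTCHA_SECRET"], "response": client_response}
    if remote_ip:
        params["remoteip"] = remote_ip
    return urlencode(params)
```

The payload would then be sent with an HTTP POST to SITEVERIFY_URL; keeping the secret out of the payload-building code's source means a repository leak no longer exposes it.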
secrets:S6693
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

SSH private keys are used for authentication and secure communication in SSH (Secure Shell) protocols. They are a form of asymmetric cryptography, where a pair of keys is generated: a private key and a corresponding public key. SSH keys provide a secure and efficient way to authenticate and establish secure connections between clients and servers. They are widely used for remote login, file transfer, and secure remote administration.

When an SSH private key is leaked to an unintended audience, it can have severe consequences for security and confidentiality. One of the primary outcomes is unauthorized access. The unintended audience can exploit the leaked private key to authenticate themselves as the legitimate owner, gaining unauthorized entry to systems, servers, or accounts that accept the key for authentication. This unauthorized access opens the door for various malicious activities, including data breaches, unauthorized modifications, and misuse of sensitive information.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it.
Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Depending on the information system the key is used to authenticate against, the audit method might change. For example, on Linux systems, the system-wide authentication logs could be used to audit recent connections from an affected account.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  String key = """
      -----BEGIN OPENSSH PRIVATE KEY-----
      b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
      QyNTUxOQAAACDktj2RM1D2wRTQ0H+YZsFqnAuZrqBNEB4PpJ5xm73nWwAAAJgJVPFECVTx
      RAAAAAtzc2gtZWQyNTUxOQAAACDktj2RM1D2wRTQ0H+YZsFqnAuZrqBNEB4PpJ5xm73nWw
      AAAECQ8Nzp6a1ZJgS3SWh2pMxe90W9tZVDZ+MZT35GjCJK2uS2PZEzUPbBFNDQf5hmwWqc
      C5muoE0QHg+knnGbvedbAAAAFGdhZXRhbmZlcnJ5QFBDLUwwMDc3AQ==
      -----END OPENSSH PRIVATE KEY-----"""; // Noncompliant

Compliant solution:

  String key = System.getenv("SSH_KEY");

Resources

Standards
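Because PEM and OpenSSH private keys are framed by well-known BEGIN markers, a simple scan can catch them before they are committed. A minimal sketch (the marker list is illustrative, not exhaustive):

```python
# PEM/OpenSSH private key blocks are framed by well-known BEGIN markers.
KEY_MARKERS = (
    "-----BEGIN OPENSSH PRIVATE KEY-----",
    "-----BEGIN RSA PRIVATE KEY-----",
    "-----BEGIN EC PRIVATE KEY-----",
    "-----BEGIN PRIVATE KEY-----",
)

def contains_private_key(text: str) -> bool:
    """Flag file contents that embed a private key block."""
    return any(marker in text for marker in KEY_MARKERS)
```

Note that such a check only detects unencrypted key material in its standard framing; it is a safety net, not a substitute for keeping keys out of repositories in the first place.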
secrets:S6694
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in MongoDB are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data. If a MongoDB password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a MongoDB database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.
For example, if the MongoDB instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. MongoDB instances maintain a log that includes user authentication events. This log can be used to audit recent malicious connections.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example:

  uri = "mongodb://foouser:foopass@example.com/testdb"  # Noncompliant

Compliant solution:

  import os

  user = os.environ["MONGO_USER"]
  password = os.environ["MONGO_PASSWORD"]
  uri = f"mongodb://{user}:{password}@example.com/testdb"

Resources

Standards
Documentation
|
||||||||||||
secrets:S6695 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?WeChat application keys are used for authentication and authorization purposes when integrating third-party applications with the WeChat platform. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret. Compromise of sensitive personal dataThis kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users
have shared on the platform. In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.
Phishing and spam
An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker. Spam can cause users to be exposed to the following:
Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website. Malware distributionDue to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems. WeChat exploitationFurthermore, the leaked app key could enable unauthorized parties to manipulate or disrupt the functionality of the WeChat app. They could tamper with app settings, inject malicious code, or even take control of the app’s user base. Such actions could result in a loss of user trust, service disruptions, and reputational damage for both the app developer and the WeChat platform. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Analyze recent secret use When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Use a secret vault A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. Code examplesNoncompliant code exampleprops.set("secret_key", "40b6b70508b47cbfb4ee39feb617a05a") // Noncompliant Compliant solutionprops.set("secret_key", System.getenv("SECRET_KEY")) ResourcesStandards |
||||||||||||
secrets:S6771 |
Postman is an API development and testing platform that allows developers to design, build, and test APIs. Postman tokens are used for authentication and authorization purposes when making API requests. Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?If a Postman token is leaked or compromised, it can lead to several security issues and risks. Here are some potential consequences: Unauthorized accessAn attacker who gains access to a leaked token can use it to impersonate the legitimate user or application associated with the token. This can result in unauthorized access to sensitive data or functionality within the API. Data breachesIf the leaked token provides access to sensitive data, an attacker can use it to retrieve or manipulate that data. This can lead to data breaches that compromise the confidentiality and integrity of the information. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes. In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed. 
API abuseWith a leaked token, an attacker can abuse the API by making unauthorized requests, consuming excessive resources, or performing malicious actions. This can disrupt the API’s regular operation, impact performance, or even cause denial-of-service (DoS) attacks. Privilege escalationDepending on the permissions and scope associated with the token, an attacker may be able to escalate their privileges within the API. They can gain access to additional resources or perform actions that they are not authorized to do. Breach of trust in non-repudiation and disruption of the audit trailWhen such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity. All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications. Reputation damageIf a token is leaked and used for malicious purposes, it can damage the reputation of the API provider. Users may lose trust in the security of the API, leading to a loss of business and credibility. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Analyze recent secret use When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Use a secret vault A secret vault should be used to generate and store the new secret. 
This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. Code examplesNoncompliant code exampleconst axios = require('axios'); const apiKey = 'PMAK-6502e63761882f002a69f0cb-6d9bc58cd0cc60ff5547f81cf2ca141bb9'; // Noncompliant const options = { method: 'get', url: 'https://api.getpostman.com/me', headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey } }; (async() => { await axios(options); })(); Compliant solutionconst axios = require('axios'); const apiKey = process.env.POSTMAN_API_KEY; const options = { method: 'get', url: 'https://api.getpostman.com/me', headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey } }; (async() => { await axios(options); })(); ResourcesDocumentationArticles & blog postsHow to Get Started with the Postman API Standards |
||||||||||||
kotlin:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example
val params = "password=xxxx" // Sensitive
val writer = OutputStreamWriter(getOutputStream())
writer.write(params)
writer.flush()
...
val password = "xxxx" // Sensitive
...
Compliant Solution
val params = "password=${retrievePassword()}"
val writer = OutputStreamWriter(getOutputStream())
writer.write(params)
writer.flush()
...
val password = retrievePassword()
...
See
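The compliant code above delegates to a retrievePassword() helper that the rule leaves abstract. As a minimal sketch (plain Java for portability; the DB_PASSWORD variable, the db.password.file property, and the Secrets class name are illustrative, not part of the rule), such a helper can resolve the credential from the environment, or from a file named in configuration, and fail loudly when neither source is set:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class Secrets {
    // Resolve the credential at runtime instead of hard-coding it:
    // 1. environment variable, 2. file referenced by a system property.
    public static String retrievePassword() throws Exception {
        String fromEnv = System.getenv("DB_PASSWORD");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        String path = System.getProperty("db.password.file");
        if (path != null) {
            return Files.readString(Path.of(path)).trim();
        }
        throw new IllegalStateException("No credential source configured");
    }
}
```

Throwing when no source is configured is deliberate: silently falling back to a default credential would reintroduce exactly the problem this rule flags.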
|
||||||||||||
kotlin:S5332 |
Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that use of the http protocol is being deprecated by major web browsers. In the past, clear-text protocols have led to the following vulnerabilities:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.
Sensitive Code Example
These clients from the Apache Commons Net library are based on unencrypted protocols and are not recommended:
val telnet = TelnetClient() // Sensitive
val ftpClient = FTPClient() // Sensitive
val smtpClient = SMTPClient() // Sensitive
Unencrypted HTTP connections, when using the okhttp library for instance, should be avoided:
val spec: ConnectionSpec = ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT) // Sensitive
    .build()
Android WebView can be configured to allow a secure origin to load content from any other origin, even if that origin is insecure (mixed content):
import android.webkit.WebView
val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_ALWAYS_ALLOW) // Sensitive
Compliant Solution
Use instead these clients from Apache Commons Net and the JSch library:
val jsch = JSch()
if (implicit) {
    // implicit mode is considered deprecated but offers the same level of security as explicit mode
    val ftpsClient = FTPSClient(true)
} else {
    val ftpsClient = FTPSClient()
}
if (implicit) {
    // implicit mode is considered deprecated but offers the same level of security as explicit mode
    val smtpsClient = SMTPSClient(true)
} else {
    val smtpsClient = SMTPSClient()
    smtpsClient.connect("127.0.0.1", 25)
    if (smtpsClient.execTLS()) {
        // commands
    }
}
Perform encrypted HTTP connections, with the okhttp library for instance:
val spec: ConnectionSpec = ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
    .build()
The most secure mode for Android WebView is MIXED_CONTENT_NEVER_ALLOW:
import android.webkit.WebView
val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW)
Exceptions
No issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
kotlin:S6300 |
Storing files locally is a common task for mobile applications. Files that are stored unencrypted can be read out and modified by an attacker with physical access to the device. Access to sensitive data can be harmful to the user of the application, for example, when the device is stolen.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions.
Recommended Secure Coding Practices
It’s recommended to password-encrypt local files that contain sensitive information. The class EncryptedFile can be used to easily encrypt files.
Sensitive Code Example
val targetFile = File(activity.filesDir, "data.txt")
targetFile.writeText(fileContent) // Sensitive
Compliant Solution
val mainKey = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
val encryptedFile = EncryptedFile.Builder(
    File(activity.filesDir, "data.txt"),
    activity,
    mainKey,
    EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build()
encryptedFile.openFileOutput().apply {
    write(fileContent)
    flush()
    close()
}
See
|
||||||||||||
kotlin:S6301 |
When storing local data in a mobile application, it is common to use a database that can be encrypted. When encryption of this database is enabled, the encryption key must be protected properly.
Why is this an issue?
Mobile applications often need to store data (which might be sensitive) locally. For Android, there exist several libraries that simplify this process by offering a feature-rich database system. SQLCipher and Realm are examples of such libraries. These libraries often add support for database encryption, to protect the contents from being read by other apps or by attackers. When using encryption for such a database, it is important that the encryption key stays secret. If this key is hardcoded in the application, it should be considered compromised: it will be known by anyone with access to the application’s binary code or source code. This means that the sensitive encrypted data can be decrypted by anyone having access to the binary of the mobile application. Furthermore, if the key is hardcoded, it is the same for every user, so a compromise of this encryption key implicates every user of the app. In short, the encryption key is meant to stay secret and must not be hard-coded in the application.
What is the potential impact?
If an attacker is able to find the encryption key for the mobile database, this can potentially have severe consequences.
Theft of sensitive data
If a mobile database is encrypted, it is likely to contain data that is sensitive for the user or the app publisher. For example, it can contain personally identifiable information (PII), financial data, login credentials, or other sensitive user data. By not protecting the encryption key properly, it becomes very easy for an attacker to recover it and then decrypt the mobile database. At that point, the theft of sensitive data might lead to identity theft, financial fraud, and other forms of malicious activities.
How to fix it in Realm
Code examples
In the example below, a local database is opened using a hardcoded key. To fix this, the key is moved to a secure location instead and retrieved using a getKey() function, whose implementation depends on the key storage option chosen.
Noncompliant code example
val key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g"
val config = RealmConfiguration.Builder()
    .encryptionKey(key.toByteArray()) // Noncompliant
    .build()
val realm = Realm.getInstance(config)
Compliant solution
val config = RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build()
val realm = Realm.getInstance(config)
How does this work?
Using Android’s builtin key storage options
The Android Keystore system allows apps to store encryption keys in a container that is protected on a system level. Additionally, it can restrict when and how the keys are used. For example, it allows the app to require user authentication (for example using a fingerprint) before the key is made available. This is the recommended way to store cryptographic keys on Android.
Dynamically retrieving encryption keys remotely
As user devices are less trusted than controlled environments such as the application backend, the latter should be preferred for the storage of encryption keys. This requires that a user’s device has an internet connection, which may not be suitable for every use case.
Going the extra mile
Avoid storing sensitive data on user devices
In general, it is always preferable to store as little sensitive data on user devices as possible. Of course, some sensitive data always has to be stored on client devices, such as the data required for authentication. In this case, consider whether the application logic can also function with a hash (or otherwise non-reversible form) of that data. For example, if an email address is required for authentication, it might be possible to use and store a hashed version of this address instead.
Resources
Documentation
Standards
|
||||||||||||
kotlin:S6432 |
When encrypting data using AES-GCM or AES-CCM, it is essential not to reuse the same initialization vector (IV, also called nonce) with a given key. To prevent this, it is recommended to either randomize the IV for each encryption or increment the IV after each encryption.
Why is this an issue?
When encrypting data using a counter (CTR) derived block cipher mode of operation, it is essential not to reuse the same initialization vector (IV) for a given key. An IV that complies with this requirement is called a "nonce" (number used once). Galois/Counter (GCM) and Counter with Cipher Block Chaining-Message Authentication Code (CCM) are both derived from counter mode. When using AES-GCM or AES-CCM, a given key and IV pair will create a "keystream" that is used to encrypt a plaintext (original content) into a ciphertext (encrypted content). For any key and IV pair, this keystream is always deterministic. Because of this property, encrypting several plaintexts with one key and IV pair can be catastrophic. If an attacker has access to one plaintext and its associated ciphertext, they are able to decrypt everything that was created using the same pair. Additionally, IV reuse also drastically decreases the key recovery computational complexity by downgrading it to a simpler polynomial root-finding problem. This means that even without access to a plaintext/ciphertext pair, an attacker may still be able to decrypt all the sensitive data.
What is the potential impact?
If the encryption that is being used is flawed, attackers might be able to exploit it in several ways. They might be able to decrypt existing sensitive data or bypass key protections. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.
Theft of sensitive data
The encrypted message might contain data that is considered sensitive and should not be known to third parties. By not using the encryption algorithm correctly, the likelihood that an attacker might be able to recover the original sensitive data drastically increases.
Additional attack surface
Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. If an attacker is able to modify the cleartext of the encrypted message, it might be possible to trigger other vulnerabilities in the code.
How to fix it in Java Cryptography Extension
Code examples
The example uses a hardcoded IV as a nonce, which causes AES-GCM to be insecure. To fix it, a nonce is randomly generated instead.
Noncompliant code example
fun encrypt(key: ByteArray, ptxt: ByteArray) {
    val iv = "7cVgr5cbdCZV".toByteArray()
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    val keySpec = SecretKeySpec(key, "AES")
    val gcmSpec = GCMParameterSpec(128, iv)
    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec) // Noncompliant
}
Compliant solution
fun encrypt(key: ByteArray, ptxt: ByteArray) {
    val random = SecureRandom()
    val iv = ByteArray(12)
    random.nextBytes(iv)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    val keySpec = SecretKeySpec(key, "AES")
    val gcmSpec = GCMParameterSpec(128, iv)
    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec)
}
How does this work?
For AES-GCM and AES-CCM, NIST recommends generating a nonce using either a deterministic approach or a random bit generator (RBG).
Generating nonces using random number generation
When using a randomized approach, NIST recommends a nonce of at least 96 bits generated using a cryptographically secure pseudorandom number generator (CSPRNG). Such a generator can create output with a sufficiently low probability of the same number being output twice (also called a collision) for a long time. However, after 2^32 generated numbers for the same key, NIST recommends rotating this key for a new one, as beyond that point the probability of a collision is high enough to be considered insecure. The code example above demonstrates how CSPRNGs can be used to generate nonces. Be careful to use a random number generator that is sufficiently secure. Default (non-cryptographically secure) RNGs might be more prone to collisions in their output, which is catastrophic for counter-based encryption modes.
Deterministically generating nonces
One method to prevent the same IV from being used multiple times for the same key is to update the IV in a deterministic way after each encryption. The most straightforward deterministic method for this is a counter. The way this works is simple: for any key, the first IV is the number zero. After this IV is used to encrypt something with a key, it is incremented for that key (and is now equal to 1). Although this requires additional bookkeeping, it guarantees that for each encryption key, an IV is never repeated. For a secure implementation, NIST suggests generating these nonces in two parts: a fixed field and an invocation field. The fixed field should be used to identify the device executing the encryption (for example, it could contain a device ID), such that for one key, no two devices can generate the same nonce. The invocation field contains the counter as described above. For a 96-bit nonce, NIST recommends (but does not require) using a 32-bit fixed field and a 64-bit invocation field. Additional details can be found in the NIST Special Publication 800-38D.
Resources
Standards
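The two-part deterministic construction described above can be sketched in a few lines of plain Java (the NonceGenerator class name and deviceId parameter are made up for this illustration): a 96-bit nonce built from a 32-bit fixed field identifying the encrypting device and a 64-bit invocation counter that is incremented after every use.

```java
import java.nio.ByteBuffer;

public class NonceGenerator {
    private final int fixedField;   // identifies the encrypting device
    private long invocation = 0;    // incremented after every nonce

    public NonceGenerator(int deviceId) {
        this.fixedField = deviceId;
    }

    // Returns a fresh 96-bit (12-byte) nonce: a 32-bit fixed field
    // followed by a 64-bit big-endian invocation counter.
    public synchronized byte[] next() {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putInt(fixedField);
        buf.putLong(invocation++);
        return buf.array();
    }
}
```

Note that the counter state must survive process restarts (e.g. be persisted) for the no-reuse guarantee to hold, and the key has to be rotated before the counter is exhausted.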
|
||||||||||||
kotlin:S3329 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In Cipher Block Chaining (CBC) mode, each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV). If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.
What is the potential impact?
After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.
Additional attack surface
By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Breach of confidentiality and privacy
When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, a company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization: customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.
Legal and compliance issues
In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.
How to fix it in Java Cryptographic Extension
Code examples
Noncompliant code example
import java.nio.charset.StandardCharsets
import java.security.InvalidAlgorithmParameterException
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

fun encrypt(key: String, plainText: String) {
    val randomBytes = "7cVgr5cbdCZVw5WY".toByteArray(StandardCharsets.UTF_8)
    val iv = IvParameterSpec(randomBytes)
    val keySpec = SecretKeySpec(key.toByteArray(StandardCharsets.UTF_8), "AES")
    try {
        val cipher = Cipher.getInstance("AES/CBC/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv) // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidKeyException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}
Compliant solution
In this example, the code explicitly uses a number generator that is considered strong.
import java.nio.charset.StandardCharsets
import java.security.SecureRandom
import java.security.InvalidAlgorithmParameterException
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

fun encrypt(key: String, plainText: String) {
    val random = SecureRandom()
    val randomBytes = ByteArray(16)
    random.nextBytes(randomBytes)
    val iv = IvParameterSpec(randomBytes)
    val keySpec = SecretKeySpec(key.toByteArray(StandardCharsets.UTF_8), "AES")
    try {
        val cipher = Cipher.getInstance("AES/CBC/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv)
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidKeyException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}
How does this work?
Use unique IVs
To ensure high security, initialization vectors must meet two important criteria:
The IV does not need to be secret, so the IV, or information sufficient to determine it, may be transmitted along with the ciphertext. In the previous non-compliant example, the problem is not that the IV is hard-coded in itself, but that hard-coding it means the same IV is reused for every encryption performed with the same key.
Resources
Standards
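The pattern leak described above is easy to demonstrate. In this standalone Java sketch (class and method names are illustrative), encrypting the same plaintext twice under the same key and IV yields byte-identical ciphertexts, telling any observer that the two messages are equal; a fresh random IV removes that signal:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class IvReuseDemo {
    // AES-CBC encryption of one message with an explicit IV.
    static byte[] encrypt(byte[] key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                new IvParameterSpec(iv));
        return cipher.doFinal(plaintext);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];
        byte[] fixedIv = new byte[16]; // a reused IV (all zeros here)
        byte[] msg = "attack at dawn".getBytes();

        // Same key + same IV: identical ciphertexts reveal message equality.
        System.out.println(Arrays.equals(
                encrypt(key, fixedIv, msg), encrypt(key, fixedIv, msg))); // true

        // A fresh random IV per message produces different ciphertexts.
        byte[] randomIv = new byte[16];
        new SecureRandom().nextBytes(randomIv);
        System.out.println(Arrays.equals(
                encrypt(key, fixedIv, msg), encrypt(key, randomIv, msg))); // false
    }
}
```

This only shows the equality leak; in practice a predictable IV in CBC mode also enables chosen-plaintext attacks, which is why a CSPRNG-generated IV per message is required.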
|
||||||||||||
kotlin:S4347 |
When using SecureRandom, it is important that the generator is seeded with unpredictable data, not with a constant or otherwise predictable value.
Why is this an issue?
A predictable seed can have severe security implications for cryptographic operations that rely on the randomness of the generated numbers. By using a predictable seed, an attacker can potentially guess or deduce the generated numbers, compromising the security of whatever cryptographic algorithm relies on SecureRandom.
What is the potential impact?
It is crucial to understand that the strength of cryptographic algorithms heavily relies on the quality of the random numbers used. By improperly seeding the generator, the generated numbers become predictable, and so does everything derived from them.
Insecure cryptographic keys
One of the primary use cases for a secure random number generator is the creation of cryptographic keys. If an attacker can guess the seed, they may be able to reproduce the generated keys and decrypt data or forge signatures.
Session hijacking and man-in-the-middle attack
Another scenario where this vulnerability can be exploited is in the generation of session tokens or nonces for secure communication protocols. If an attacker can predict the seed used to generate these tokens, they can impersonate legitimate users or intercept sensitive information.
How to fix it in Java SE
Code examples
The following examples use a cryptographically strong random number generator but seed it with a predictable value, so the data it generates is not cryptographically strong.
Noncompliant code example
import java.security.SecureRandom

val sr = SecureRandom()
sr.setSeed(123456L) // Noncompliant
val v = sr.nextInt()
import java.security.SecureRandom

val sr = SecureRandom("abcdefghijklmnop".toByteArray(charset("us-ascii"))) // Noncompliant
val v = sr.nextInt()
Compliant solution
import java.security.SecureRandom

val sr = SecureRandom()
val v = sr.nextInt()
This solution is available for JDK 1.8 and higher.
import java.security.SecureRandom

val sr = SecureRandom.getInstanceStrong()
val v = sr.nextInt()
How does this work?
When the randomly generated data needs to be cryptographically strong, SecureRandom should be used without an explicit seed, so that it seeds itself from the system’s entropy source. To go the extra mile, SecureRandom.getInstanceStrong() returns an instance backed by the strongest algorithm available on the platform. If the randomly generated data is not used for cryptographic purposes and is not business critical, it may be a better choice to use a faster, non-cryptographic generator such as java.util.Random or kotlin.random.Random instead.
ResourcesDocumentation
Standards
|
||||||||||||
kotlin:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.
Recommended Secure Coding Practices
Do not enable debugging features on applications distributed to end users.
Sensitive Code Example
WebView.setWebContentsDebuggingEnabled(true) for Android enables debugging support:
import android.webkit.WebView
WebView.setWebContentsDebuggingEnabled(true) // Sensitive
Compliant Solution
WebView.setWebContentsDebuggingEnabled(false) for Android disables debugging support:
import android.webkit.WebView
WebView.setWebContentsDebuggingEnabled(false)
See
|
||||||||||||
kotlin:S5322 |
Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities:
Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application, if it is not already running, once a matching broadcast is received. The receiver is an entry point into the application. Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver. Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver. This rule raises an issue when a receiver is registered without specifying any broadcast permission.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesRestrict the access to broadcasted intents. See the Android documentation for more information. Sensitive Code Exampleimport android.content.BroadcastReceiver import android.content.Context import android.content.IntentFilter import android.os.Build import android.os.Handler import androidx.annotation.RequiresApi class MyIntentReceiver { @RequiresApi(api = Build.VERSION_CODES.O) fun register( context: Context, receiver: BroadcastReceiver?, filter: IntentFilter?, scheduler: Handler?, flags: Int ) { context.registerReceiver(receiver, filter) // Sensitive context.registerReceiver(receiver, filter, flags) // Sensitive // Broadcasting intent with "null" for broadcastPermission context.registerReceiver(receiver, filter, null, scheduler) // Sensitive context.registerReceiver(receiver, filter, null, scheduler, flags) // Sensitive } } Compliant Solutionimport android.content.BroadcastReceiver import android.content.Context import android.content.IntentFilter import android.os.Build import android.os.Handler import androidx.annotation.RequiresApi class MyIntentReceiver { @RequiresApi(api = Build.VERSION_CODES.O) fun register( context: Context, receiver: BroadcastReceiver?, filter: IntentFilter?, broadcastPermission: String?, scheduler: Handler?, flags: Int ) { context.registerReceiver(receiver, filter, broadcastPermission, scheduler) context.registerReceiver(receiver, filter, broadcastPermission, scheduler, flags) } } See
|
||||||||||||
kotlin:S6362 |
WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. In the context of a WebView, JavaScript code can exfiltrate local files that might be sensitive or even worse, access exposed functions of the application that can result in more severe vulnerabilities such as code injection. Thus JavaScript support should not be enabled for WebViews unless it is absolutely necessary and the authenticity of the web resources can be guaranteed. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to disable JavaScript support for WebViews unless it is necessary to execute JavaScript code. Only trusted pages should be rendered. Sensitive Code Exampleimport android.webkit.WebView val webView: WebView = findViewById(R.id.webview) webView.getSettings().setJavaScriptEnabled(true) // Sensitive Compliant Solutionimport android.webkit.WebView val webView: WebView = findViewById(R.id.webview) webView.getSettings().setJavaScriptEnabled(false) See
|
||||||||||||
kotlin:S6363 |
WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. If malicious JavaScript code in a WebView is executed this can leak the contents of sensitive files when access to local files is enabled. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to disable access to local files for WebViews unless it is necessary. In the case of a successful attack through a Cross-Site Scripting vulnerability, the attacker's attack surface decreases drastically if no files can be read out. Sensitive Code Exampleimport android.webkit.WebView val webView: WebView = findViewById(R.id.webview) webView.getSettings().setAllowContentAccess(true) // Sensitive webView.getSettings().setAllowFileAccess(true) // Sensitive Compliant Solutionimport android.webkit.WebView val webView: WebView = findViewById(R.id.webview) webView.getSettings().setAllowContentAccess(false) webView.getSettings().setAllowFileAccess(false) See
|
||||||||||||
kotlin:S2053 |
This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes. Why is this an issue?During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords. However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital. What is the potential impact?Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need. Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster. If multiple users have the same password and the same salt, their password hashes would be identical. 
This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once. A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salts that can then be attacked as explained before. With short salts, the probability of a collision between two users' password-and-salt pairs might be low depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using longer, cryptographically secure salts should be preferred. How to fix it in Java SECode examplesThe following code contains examples of hard-coded salts. Noncompliant code exampleimport javax.crypto.spec.PBEParameterSpec fun hash() { val salt = "salty".toByteArray() val cipherSpec = PBEParameterSpec(salt, 10000) // Noncompliant } Compliant solutionimport java.security.SecureRandom import javax.crypto.spec.PBEParameterSpec fun hash() { val random = SecureRandom() val salt = ByteArray(16) random.nextBytes(salt) val cipherSpec = PBEParameterSpec(salt, 10000) } How does this work?This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards. Here, the compliant code example ensures the salt is random and has a sufficient length by calling SecureRandom.nextBytes() on a 16-byte array. Resources Standards |
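The compliant example above only builds the parameter spec. As a fuller sketch of the same idea, the following ties a fresh random salt to an actual PBKDF2 derivation (the helper name and iteration count are illustrative, not part of the rule):

```kotlin
import java.security.SecureRandom
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

// Hypothetical helper: hashes a password with a fresh 16-byte random salt
// and returns (salt, hash) so the salt can be stored alongside the hash.
fun hashPassword(password: CharArray): Pair<ByteArray, ByteArray> {
    val salt = ByteArray(16).also { SecureRandom().nextBytes(it) }
    val spec = PBEKeySpec(password, salt, 10_000, 256)
    val hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec).encoded
    spec.clearPassword() // wipe the password copy held by the spec
    return salt to hash
}

fun main() {
    val (salt1, hash1) = hashPassword("correct horse".toCharArray())
    val (salt2, hash2) = hashPassword("correct horse".toCharArray())
    // Same password twice, but different salts, so the stored hashes differ.
    println(!salt1.contentEquals(salt2) && !hash1.contentEquals(hash2))
}
```

Because each call draws its own salt, identical passwords no longer produce identical stored hashes, which is exactly the clustering attack the rule describes.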
||||||||||||
kotlin:S5320 |
In Android applications, broadcasting intents is security-sensitive. For example, it has led in the past to the following vulnerability: By default, broadcasted intents are visible to every application, exposing all sensitive information they contain. This rule raises an issue when an intent is broadcasted without specifying any "receiver permission". Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesRestrict the access to broadcasted intents. See Android documentation for more information. Sensitive Code Exampleimport android.content.BroadcastReceiver import android.content.Context import android.content.Intent import android.os.Bundle import android.os.Handler import android.os.UserHandle public class MyIntentBroadcast { fun broadcast(intent: Intent, context: Context, user: UserHandle, resultReceiver: BroadcastReceiver, scheduler: Handler, initialCode: Int, initialData: String, initialExtras: Bundle, broadcastPermission: String) { context.sendBroadcast(intent) // Sensitive context.sendBroadcastAsUser(intent, user) // Sensitive // Broadcasting intent with "null" for receiverPermission context.sendBroadcast(intent, null) // Sensitive context.sendBroadcastAsUser(intent, user, null) // Sensitive context.sendOrderedBroadcast(intent, null) // Sensitive context.sendOrderedBroadcastAsUser(intent, user, null, resultReceiver, scheduler, initialCode, initialData, initialExtras) // Sensitive } } Compliant Solutionimport android.content.BroadcastReceiver import android.content.Context import android.content.Intent import android.os.Bundle import android.os.Handler import android.os.UserHandle public class MyIntentBroadcast { fun broadcast(intent: Intent, context: Context, user: UserHandle, resultReceiver: BroadcastReceiver, scheduler: Handler, initialCode: Int, initialData: String, initialExtras: Bundle, broadcastPermission: String) { context.sendBroadcast(intent, broadcastPermission) context.sendBroadcastAsUser(intent, user, broadcastPermission) context.sendOrderedBroadcast(intent, broadcastPermission) context.sendOrderedBroadcastAsUser(intent, user,broadcastPermission, resultReceiver, scheduler, initialCode, initialData, initialExtras) } } See
|
||||||||||||
kotlin:S5324 |
Storing data locally is a common task for mobile applications. Such data includes files among other things. One convenient way to store files is to use the external file storage, which usually offers a larger amount of disk space compared to internal storage. Files created on external storage are globally readable and writable. Therefore, a malicious application having the permissions
External storage can also be removed by the user (e.g when based on SD card) making the files unavailable to the application. Ask Yourself WhetherYour application uses external storage to:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleimport android.content.Context class AccessExternalFiles { fun accessFiles(context: Context) { context.getExternalFilesDir(null) // Sensitive } } Compliant Solutionimport android.content.Context import android.os.Environment class AccessExternalFiles { fun accessFiles(context: Context) { context.getFilesDir() } } See
|
||||||||||||
kotlin:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms. And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in Java Cryptographic ExtensionCode examplesNoncompliant code exampleExample with a symmetric cipher, AES: import javax.crypto.Cipher import javax.crypto.NoSuchPaddingException import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val aes = Cipher.getInstance("AES/CBC/PKCS5Padding"); // Noncompliant } catch (e: NoSuchAlgorithmException) { // ... } catch (e: NoSuchPaddingException) { // ... 
} } Example with an asymmetric cipher, RSA: import javax.crypto.Cipher import javax.crypto.NoSuchPaddingException import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val rsa = Cipher.getInstance("RSA/None/NoPadding"); // Noncompliant } catch (e: NoSuchAlgorithmException) { // ... } catch (e: NoSuchPaddingException) { // ... } } Compliant solutionFor the AES symmetric cipher, use the GCM mode: import javax.crypto.Cipher import javax.crypto.NoSuchPaddingException import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val aes = Cipher.getInstance("AES/GCM/NoPadding"); } catch (e: NoSuchAlgorithmException) { // ... } catch (e: NoSuchPaddingException) { // ... } } For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP): import javax.crypto.Cipher import javax.crypto.NoSuchPaddingException import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val rsa = Cipher.getInstance("RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING"); } catch (e: NoSuchAlgorithmException) { // ... } catch (e: NoSuchPaddingException) { // ... } } How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. Appropriate choices are currently the following. For AES: Use Galois/Counter mode (GCM)GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead. For RSA: use the OAEP schemeThe Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA. ResourcesArticles & blog posts
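To make the GCM recommendation concrete, here is a minimal round-trip sketch. The helper names and the nonce-prefixing convention are ours, not part of the rule; the key point is that GCM needs a fresh 96-bit nonce per encryption and authenticates the ciphertext:

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Encrypts with AES-GCM using a fresh 96-bit nonce; returns nonce || ciphertext.
// The nonce must never be reused with the same key.
fun encrypt(key: SecretKey, plaintext: ByteArray): ByteArray {
    val nonce = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, nonce))
    return nonce + cipher.doFinal(plaintext)
}

// Splits off the 12-byte nonce, then decrypts and verifies the GCM tag.
fun decrypt(key: SecretKey, message: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, message.copyOfRange(0, 12)))
    return cipher.doFinal(message, 12, message.size - 12)
}

fun main() {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val sealed = encrypt(key, "attack at dawn".toByteArray())
    println(String(decrypt(key, sealed))) // prints "attack at dawn"
}
```

If the ciphertext or tag is tampered with, `doFinal` throws an `AEADBadTagException` instead of returning modified plaintext, which is the authenticity property CBC alone does not provide.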
Standards
|
||||||||||||
kotlin:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in Java Cryptographic ExtensionCode examplesThe following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided. Noncompliant code exampleimport javax.crypto.NoSuchPaddingException import java.security.NoSuchAlgorithmException import javax.crypto.Cipher fun main(args: Array<String>) { try { val des = Cipher.getInstance("DES") // Noncompliant } catch (e: NoSuchAlgorithmException) { // ... } catch (e: NoSuchPaddingException) { // ... } } Compliant solutionimport javax.crypto.NoSuchPaddingException import java.security.NoSuchAlgorithmException import javax.crypto.Cipher fun main(args: Array<String>) { try { val aes = Cipher.getInstance("AES/GCM/NoPadding") } catch (e: NoSuchAlgorithmException) { // ... } catch (e: NoSuchPaddingException) { // ... 
} } How does this work?Use a secure algorithmIt is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits. ResourcesStandards
|
||||||||||||
kotlin:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact. Ask Yourself WhetherThe disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesDon’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Exampleval ip = "192.168.12.42" val socket = ServerSocket(ip, 6667) Compliant Solutionval ip = System.getenv("myapplication.ip") val socket = ServerSocket(ip, 6667) ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
kotlin:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Java Cryptographic ExtensionCode examplesNoncompliant code exampleimport javax.net.ssl.SSLContext; import java.security.NoSuchAlgorithmException; fun main(args: Array<String>) { try { SSLContext.getInstance("TLSv1.1"); // Noncompliant } catch (e: NoSuchAlgorithmException) { // ... } } Compliant solutionimport javax.net.ssl.SSLContext; import java.security.NoSuchAlgorithmException; fun main(args: Array<String>) { try { SSLContext.getInstance("TLSv1.2"); } catch (e: NoSuchAlgorithmException) { // ... } } How does this work?As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following. Use TLS v1.2 or TLS v1.3Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. 
The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may still enable older cipher suites that are now deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. Resources Articles & blog posts
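One way to address the cipher-suite caveat on outdated setups is to explicitly restrict which protocol versions a socket may negotiate. A sketch using JSSE (the helper name is hypothetical):

```kotlin
import javax.net.ssl.SSLContext
import javax.net.ssl.SSLSocket

// Keeps only TLS 1.2 and TLS 1.3 enabled for the handshake;
// SSLv3, TLS 1.0 and TLS 1.1 stay disabled even if the platform supports them.
fun hardened(socket: SSLSocket): SSLSocket {
    socket.enabledProtocols = socket.supportedProtocols
        .filter { it == "TLSv1.2" || it == "TLSv1.3" }
        .toTypedArray()
    return socket
}

fun main() {
    val socket = SSLContext.getDefault().socketFactory.createSocket() as SSLSocket
    println(hardened(socket).enabledProtocols.toList())
    socket.close()
}
```

This complements, rather than replaces, requesting a modern context via `SSLContext.getInstance("TLSv1.2")` as shown in the compliant solution.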
Standards
|
||||||||||||
kotlin:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleval random = Random() // Noncompliant: Random() is not a secure random number generator val bytes = ByteArray(20) random.nextBytes(bytes) Compliant Solutionval random = SecureRandom() // Compliant val bytes = ByteArray(20) random.nextBytes(bytes) See
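A typical place this matters is token generation. As an illustrative sketch (the helper name is ours), a session token or nonce built from `SecureRandom` output:

```kotlin
import java.security.SecureRandom
import java.util.Base64

// 256 bits of CSPRNG output, URL-safe encoded: suitable for session tokens,
// password-reset links, or nonces, where predictability would let an
// attacker guess another user's value.
fun newToken(random: SecureRandom = SecureRandom()): String {
    val bytes = ByteArray(32)
    random.nextBytes(bytes)
    return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes)
}

fun main() {
    // Two independent draws virtually never collide.
    println(newToken() != newToken())
}
```

With `java.util.Random` instead, the whole sequence can be reconstructed from a few observed outputs, which is exactly the impersonation risk the rule describes.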
|
||||||||||||
kotlin:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Java Cryptographic ExtensionCode examplesThe following code examples either explicitly or implicitly generate keys. Note that, due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm. Noncompliant code exampleHere is an example of a private key generation with RSA: import java.security.KeyPairGenerator import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val keyPairGenerator = KeyPairGenerator.getInstance("RSA") keyPairGenerator.initialize(1024) // Noncompliant } catch (e: NoSuchAlgorithmException) { // ... } } Here is an example of a secret key generation with AES: import javax.crypto.KeyGenerator import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val keyGenerator = KeyGenerator.getInstance("AES") keyGenerator.init(64) // Noncompliant } catch (e: NoSuchAlgorithmException) { // ... } } Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the curve name: import java.security.KeyPairGenerator import java.security.NoSuchAlgorithmException import java.security.InvalidAlgorithmParameterException import java.security.spec.ECGenParameterSpec fun main(args: Array<String>) { try { val keyPairGenerator = KeyPairGenerator.getInstance("EC") val ellipticCurveName = ECGenParameterSpec("secp112r1") // Noncompliant keyPairGenerator.initialize(ellipticCurveName) } catch (e: NoSuchAlgorithmException) { // ... } catch (e: InvalidAlgorithmParameterException) { // ... } } Compliant solutionimport java.security.KeyPairGenerator import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val keyPairGenerator = KeyPairGenerator.getInstance("RSA") keyPairGenerator.initialize(2048) } catch (e: NoSuchAlgorithmException) { // ... } } import javax.crypto.KeyGenerator import java.security.NoSuchAlgorithmException fun main(args: Array<String>) { try { val keyGenerator = KeyGenerator.getInstance("AES") keyGenerator.init(128) } catch (e: NoSuchAlgorithmException) { // ... } } import java.security.KeyPairGenerator import java.security.NoSuchAlgorithmException import java.security.InvalidAlgorithmParameterException import java.security.spec.ECGenParameterSpec fun main(args: Array<String>) { try { val keyPairGenerator = KeyPairGenerator.getInstance("EC") val ellipticCurveName = ECGenParameterSpec("secp256r1") keyPairGenerator.initialize(ellipticCurveName) } catch (e: NoSuchAlgorithmException) { // ... } catch (e: InvalidAlgorithmParameterException) { // ... } } How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The appropriate choices are the following.
RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both. AES (Advanced Encryption Standard)AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying
all possible keys. Currently, a minimum key size of 128 bits is recommended for AES. Elliptic Curve Cryptography (ECC)Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve
algorithms is mentioned directly in their names. For example, secp256r1 operates on a 256-bit curve. Currently, a minimum key size of 224 bits is recommended for EC algorithms. Going the extra milePre-Quantum CryptographyEncrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety. ResourcesArticles & blog posts
Standards
|
||||||||||||
kotlin:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security. When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrityWhen TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. 
How to fix it in Java Cryptographic ExtensionCode examplesThe following code contains examples of disabled certificate validation. The certificate validation gets disabled by overriding the X509TrustManager methods checkClientTrusted and checkServerTrusted with empty implementations. Noncompliant code exampleval trustAllCerts = arrayOf<TrustManager>(object : X509TrustManager { @Throws(CertificateException::class) override fun checkClientTrusted(chain: Array<java.security.cert.X509Certificate>, authType: String) { } // Noncompliant @Throws(CertificateException::class) override fun checkServerTrusted(chain: Array<java.security.cert.X509Certificate>, authType: String) { } // Noncompliant override fun getAcceptedIssuers(): Array<java.security.cert.X509Certificate> { return arrayOf() } }) How does this work?Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificatesIf possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAsIn some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. Here is a sample command to import a certificate to the Java trust store: keytool -import -alias myserver -file myserver.crt -keystore cacerts ResourcesStandards
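The rule text gives no compliant snippet, so as a sketch of the "re-enable the default validation" advice: instead of a no-op trust manager, build the SSLContext from the platform's default trust store (the helper name is ours):

```kotlin
import java.security.KeyStore
import javax.net.ssl.SSLContext
import javax.net.ssl.TrustManagerFactory

// Builds an SSLContext backed by the JVM's default trust store (cacerts),
// so server certificate chains are actually validated during the handshake.
fun defaultTlsContext(): SSLContext {
    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
    tmf.init(null as KeyStore?) // null selects the platform default trust store
    return SSLContext.getInstance("TLS").apply { init(null, tmf.trustManagers, null) }
}

fun main() {
    println(defaultTlsContext().protocol)
}
```

For self-signed or internal CAs, import the certificate into that trust store (as the `keytool` command above shows) rather than weakening the trust manager in code.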
|
||||||||||||
kotlin:S5527 |
This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security. When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. To do so, an attacker only needs a certificate that is valid for some host under their control: because the hostname is never checked against the certificate, that certificate will be accepted for any server.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in OkHttp

Code examples

The following code contains examples of disabled hostname validation.
The hostname validation gets disabled by overriding `HostnameVerifier.verify` so that it always returns `true`, regardless of the hostname presented.

Noncompliant code example

```kotlin
import javax.net.ssl.HostnameVerifier
import javax.net.ssl.SSLSession
import okhttp3.OkHttpClient
import okhttp3.Request

fun request() {
    val builder = OkHttpClient.Builder()
    builder.hostnameVerifier(object : HostnameVerifier {
        override fun verify(hostname: String?, session: SSLSession?): Boolean { // Noncompliant
            return true
        }
    })
    val client = builder.build()

    val request = Request.Builder()
        .url("https://example.com")
        .build()

    val response = client.newCall(request).execute()
}
```

Compliant solution

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

fun request() {
    // No custom HostnameVerifier: OkHttp performs the default hostname validation.
    val client = OkHttpClient.Builder().build()

    val request = Request.Builder()
        .url("https://example.com")
        .build()

    val response = client.newCall(request).execute()
}
```

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues. Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself. In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:
Here is a sample command to import a certificate to the Java trust store:

```shell
keytool -import -alias myserver -file myserver.crt -keystore cacerts
```

Resources

Standards
|
||||||||||||
kotlin:S6288 |
Android KeyStore is a secure container for storing key materials; in particular, it prevents key material extraction: even if the application process is compromised, the attacker cannot extract keys, although they may still be able to use them. It is possible to enable an Android security feature, user authentication, to restrict the usage of keys to authenticated users only. The lock screen has to be unlocked with defined credentials (pattern/PIN/password, biometric).

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable user authentication (by setting `setUserAuthenticationRequired` to `true` during key generation) to restrict the use of keys to authenticated users.

Sensitive Code Example

Any user can use the key:

```kotlin
val keyGenerator: KeyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")

val builder: KeyGenParameterSpec = KeyGenParameterSpec.Builder(
        "test_secret_key",
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT) // Noncompliant
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .build()

keyGenerator.init(builder)
```

Compliant Solution

The use of the key is limited to authenticated users (for a duration defined as 60 seconds):

```kotlin
val keyGenerator: KeyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")

val builder: KeyGenParameterSpec = KeyGenParameterSpec.Builder(
        "test_secret_key",
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .setUserAuthenticationRequired(true) // Compliant
    .setUserAuthenticationParameters(60, KeyProperties.AUTH_DEVICE_CREDENTIAL)
    .build()

keyGenerator.init(builder)
```

See
|
||||||||||||
kotlin:S4790 |
The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as if they had the originally hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160. The following APIs are tracked for use of obsolete crypto algorithms:
Ask Yourself WhetherThe hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended.

Sensitive Code Example

```kotlin
val md1: MessageDigest = MessageDigest.getInstance("SHA") // Sensitive: "SHA" is not a standard name; for most security providers it is an alias of SHA-1
val md2: MessageDigest = MessageDigest.getInstance("SHA1") // Sensitive
```

Compliant Solution

```kotlin
val md1: MessageDigest = MessageDigest.getInstance("SHA-512") // Compliant
```

See
|
||||||||||||
kotlin:S6291 |
Storing data locally is a common task for mobile applications. Such data includes preferences or authentication tokens for external services, among other things. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase, SharedPreferences, and Realm. By default these systems store the data unencrypted, thus an attacker with physical access to the device can read them out easily. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local databases that contain sensitive information. Most systems provide secure alternatives to plain-text storage that should be used. If no secure alternative is available, the data can also be encrypted manually before it is stored. The encryption password should not be hard-coded in the application. There are different approaches for how the password can be provided to encrypt and decrypt the database. In the case of `EncryptedSharedPreferences`, for example, the key material is managed via the Android Keystore.

Sensitive Code Example

For SQLiteDatabase:

```kotlin
var db = activity.openOrCreateDatabase("test.db", Context.MODE_PRIVATE, null) // Sensitive
```

For SharedPreferences:

```kotlin
val pref = activity.getPreferences(Context.MODE_PRIVATE) // Sensitive
```

For Realm:

```kotlin
val config = RealmConfiguration.Builder().build()
val realm = Realm.getInstance(config) // Sensitive
```

Compliant Solution

Instead of SQLiteDatabase you can use SQLCipher:

```kotlin
val db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null)
```

Instead of SharedPreferences you can use EncryptedSharedPreferences:

```kotlin
val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
EncryptedSharedPreferences.create(
    "secret",
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)
```

For Realm an encryption key can be specified in the config:

```kotlin
val config = RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build()
val realm = Realm.getInstance(config)
```

See
|
||||||||||||
kotlin:S6293 |
Android comes with Android KeyStore, a secure container for storing key materials. It’s possible to define certain keys to be unlocked when users authenticate using biometric credentials. This way, even if the application process is compromised, the attacker cannot access keys, as the presence of the authorized user is required. These keys can be used to encrypt, sign, or create a message authentication code (MAC) as proof that the authentication result has not been
tampered with. This protection defeats the scenario where an attacker with physical access to the device would try to hook into the application
process and call the `onAuthenticationSucceeded` callback directly.

Ask Yourself Whether

The application contains:

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to tie the biometric authentication to a cryptographic operation by using a `CryptoObject`.

Sensitive Code Example

A `BiometricPrompt.authenticate` call that does not use a `CryptoObject`:

```kotlin
// ...
val biometricPrompt: BiometricPrompt = BiometricPrompt(activity, executor, callback)
// ...
biometricPrompt.authenticate(promptInfo) // Noncompliant
```

Compliant Solution

A `BiometricPrompt.authenticate` call that ties the authentication result to a cryptographic operation:

```kotlin
// ...
val biometricPrompt: BiometricPrompt = BiometricPrompt(activity, executor, callback)
// ...
biometricPrompt.authenticate(promptInfo, BiometricPrompt.CryptoObject(cipher)) // Compliant
```

See
|
||||||||||||
go:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are possible in any case, but with a hardcoded IP address, solving the issue will take more time, which will increase an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software.

Sensitive Code Example

```go
var (
    ip   = "192.168.12.42"
    port = 3333
)

SocketClient(ip, port)
```

Compliant Solution

```go
config, err := ReadConfig("properties.ini")

ip := config["ip"]
port := config["port"]
SocketClient(ip, port)
```

Exceptions

No issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
go:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

```go
func connect() {
    user := "root"
    password := "supersecret" // Sensitive

    url := "login=" + user + "&passwd=" + password
}
```

Compliant Solution

```go
func connect() {
    user := getEncryptedUser()
    password := getEncryptedPass() // Compliant

    url := "login=" + user + "&passwd=" + password
}
```

See
|
||||||||||||
python:S2115 |
When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it.
This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences. Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in MySQL Connector/Python

Code examples

The following code uses an empty password to connect to a MySQL database. The vulnerability can be fixed by using a strong password retrieved from an environment variable.

Noncompliant code example

```python
from mysql.connector import connection

connection.MySQLConnection(host='localhost', user='sonarsource', password='')  # Noncompliant
```

Compliant solution

```python
from mysql.connector import connection
import os

db_password = os.getenv('DB_PASSWORD')
connection.MySQLConnection(host='localhost', user='sonarsource', password=db_password)
```

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:
To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase. ResourcesStandards |
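The environment-variable approach above can be sketched as a small helper. The variable name `DB_PASSWORD` matches the compliant example; the fail-fast behaviour when the variable is missing is one possible design choice, not a library requirement:

```python
import os


def get_db_password() -> str:
    """Fetch the database password from the environment.

    Failing fast when the variable is absent or empty avoids silently
    falling back to an empty (or hard-coded) password.
    """
    password = os.getenv("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD is not set; refusing to connect without credentials")
    return password
```

The returned value can then be passed to the connector, e.g. `connection.MySQLConnection(host='localhost', user='sonarsource', password=get_db_password())`.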
||||||||||||
python:S3329 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV). If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, a company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in pyca

Code examples

Noncompliant code example

```python
from cryptography.hazmat.primitives.ciphers import (
    Cipher,
    algorithms,
    modes,
)

iv = b"doNotTryThisHome"  # a hard-coded 16-byte IV, reused for every message
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
cipher.encryptor()  # Noncompliant
```

Compliant solution

In this example, the code explicitly uses a number generator that is considered strong.

```python
from os import urandom
from cryptography.hazmat.primitives.ciphers import (
    Cipher,
    algorithms,
    modes,
)

iv = urandom(16)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
cipher.encryptor()
```

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:
The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext. In the previous non-compliant example, the problem is not that the IV is hard-coded per se, but that the same IV is reused for every encryption, which allows an attacker to detect patterns across ciphertexts.

Resources

Standards
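The pattern leakage described above can be illustrated without a real cipher. The sketch below uses a keyed hash as a stand-in "block cipher" (purely illustrative, not secure encryption) and chains blocks CBC-style: with a fixed IV the same plaintext always encrypts to the same bytes, while a fresh random IV hides the repetition:

```python
import hashlib
import os

BLOCK = 16


def _keystream_block(key: bytes, chain: bytes) -> bytes:
    # Toy stand-in for a block cipher: NOT cryptographically secure.
    return hashlib.sha256(key + chain).digest()[:BLOCK]


def toy_cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Zero-pad to a block multiple, then chain each block on the previous
    # ciphertext block; the IV seeds the first block.
    padded = plaintext + b"\x00" * (-len(plaintext) % BLOCK)
    chain, out = iv, b""
    for i in range(0, len(padded), BLOCK):
        block = padded[i:i + BLOCK]
        cipher_block = bytes(a ^ b for a, b in zip(block, _keystream_block(key, chain)))
        out += cipher_block
        chain = cipher_block
    return out


key = b"0" * 16
fixed_iv = b"1" * 16
# Same key, same IV, same plaintext: identical ciphertexts leak equality.
assert toy_cbc_encrypt(key, fixed_iv, b"attack at dawn") == toy_cbc_encrypt(key, fixed_iv, b"attack at dawn")
# A fresh random IV per message hides that the plaintexts are equal.
assert toy_cbc_encrypt(key, os.urandom(16), b"attack at dawn") != toy_cbc_encrypt(key, os.urandom(16), b"attack at dawn")
```

The same observation is exactly what an attacker exploits against real CBC traffic encrypted under a static IV.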
|
||||||||||||
python:S4502 |
A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they didn’t intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application. The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request; as web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

For a Django application, the code is sensitive when,

```python
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]  # Sensitive: django.middleware.csrf.CsrfViewMiddleware is missing
```

```python
@csrf_exempt  # Sensitive
def example(request):
    return HttpResponse("default")
```

For a Flask application, the code is sensitive when,

```python
app = Flask(__name__)
app.config['WTF_CSRF_ENABLED'] = False  # Sensitive
```

```python
app = Flask(__name__)  # Sensitive: CSRFProtect is missing

@app.route('/')
def hello_world():
    return 'Hello, World!'
```

```python
app = Flask(__name__)
csrf = CSRFProtect()
csrf.init_app(app)

@app.route('/example/', methods=['POST'])
@csrf.exempt  # Sensitive
def example():
    return 'example '
```

```python
class unprotectedForm(FlaskForm):
    class Meta:
        csrf = False  # Sensitive

    name = TextField('name')
    submit = SubmitField('submit')
```

Compliant Solution

For a Django application,

```python
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',  # Compliant
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
```

```python
def example(request):  # Compliant
    return HttpResponse("default")
```

For a Flask application,

```python
app = Flask(__name__)
csrf = CSRFProtect()
csrf.init_app(app)  # Compliant
```

```python
@app.route('/example/', methods=['POST'])  # Compliant
def example():
    return 'example '
```

```python
class unprotectedForm(FlaskForm):
    class Meta:
        csrf = True  # Compliant

    name = TextField('name')
    submit = SubmitField('submit')
```

See |
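The synchronizer-token mechanism that `CsrfViewMiddleware` and `CSRFProtect` implement can be sketched in plain Python. This is a simplified model for illustration, not a substitute for the framework's protection: the server stores a random token per session, embeds it in forms, and rejects state-changing requests whose token does not match:

```python
import hmac
import secrets


def issue_csrf_token(session: dict) -> str:
    # Store a random token in the server-side session and hand a copy to the
    # client for inclusion in subsequent POST requests (e.g. a hidden field).
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token


def check_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token")
    # compare_digest avoids leaking the token through timing differences.
    return expected is not None and hmac.compare_digest(expected, submitted)


session = {}
form_token = issue_csrf_token(session)
assert check_csrf_token(session, form_token)          # legitimate form post
assert not check_csrf_token(session, "forged-value")  # cross-site forgery fails
```

A forged cross-site request cannot supply the token because the attacker's page cannot read it, which is why browsers sending cookies automatically is not enough to authenticate the action.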
||||||||||||
python:S5852 |
Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input; in some cases the processing time can become exponential in the size of the input, which is known as catastrophic backtracking. This rule determines the runtime complexity of a regular expression and informs you of the complexity if it is not linear.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking, make sure that no part of the regular expression can match the same input in more than one way. In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen. Note that when performing a full match (e.g. using `re.fullmatch`), the end of the input acts as such a pattern: any input that is not consumed entirely causes the match to fail and triggers the backtracking.
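As a concrete illustration, the nested quantifier in `(a+)+` lets the engine split a run of `a` characters in exponentially many ways, while the equivalent `a+` admits only one; both accept exactly the same strings, so the rewrite is safe:

```python
import re

ambiguous = re.compile(r"^(a+)+$")  # nested quantifiers: superlinear backtracking on failure
linear = re.compile(r"^a+$")        # same language, only one way to match

# On inputs short enough to test safely, both patterns agree on every string.
for candidate in ["a", "aaaa", "", "aab", "b"]:
    assert bool(ambiguous.fullmatch(candidate)) == bool(linear.fullmatch(candidate))
```

With the ambiguous form, a non-matching input such as `"a" * 40 + "b"` would force the engine to try on the order of 2^40 block splittings before failing; the linear form fails immediately.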
In order to rewrite your regular expression without these patterns, consider the following strategies:
Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match. This is especially true when using partial matches, for which quadratic runtimes are quite hard to avoid. In those cases consider the following approaches:
See
|
||||||||||||
python:S6245 |
This rule is deprecated, and will eventually be removed. Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks, and other attacks on the AWS infrastructure itself. There are three SSE options:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

```python
bucket = s3.Bucket(self, "bucket",
    encryption=s3.BucketEncryption.UNENCRYPTED  # Sensitive
)
```

The default value of `encryption` is `UNENCRYPTED` (unless an `encryptionKey` is provided), so omitting the argument is also sensitive.

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

```python
bucket = s3.Bucket(self, "bucket",
    encryption=s3.BucketEncryption.S3_MANAGED
)

# Alternatively with a KMS key managed by the user.
bucket = s3.Bucket(self, "bucket",
    encryptionKey=access_key
)
```

See
|
||||||||||||
python:S6265 |
Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users. The following canned ACLs are security-sensitive:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to implement the least privilege policy, i.e., to grant necessary permissions only to users for their required tasks. In the
context of canned ACL, set it to `PRIVATE`.

Sensitive Code Example

All users (i.e. anyone in the world, authenticated or not) have read and write permissions with the `PUBLIC_READ_WRITE` access control:

```python
bucket = s3.Bucket(self, "bucket",
    access_control=s3.BucketAccessControl.PUBLIC_READ_WRITE  # Sensitive
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    access_control=s3.BucketAccessControl.PUBLIC_READ_WRITE  # Sensitive
)
```

Compliant Solution

With the `PRIVATE` access control, only the bucket owner has access:

```python
bucket = s3.Bucket(self, "bucket",
    access_control=s3.BucketAccessControl.PRIVATE  # Compliant
)

# Another example
s3deploy.BucketDeployment(self, "DeployWebsite",
    access_control=s3.BucketAccessControl.PRIVATE  # Compliant
)
```

See
|
||||||||||||
python:S6270 |
Resource-based policies granting access to all users can lead to information leakage. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need the access and grant to them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

```python
from aws_cdk.aws_iam import PolicyStatement, AnyPrincipal, Effect
from aws_cdk.aws_s3 import Bucket

bucket = Bucket(self, "ExampleBucket")

bucket.add_to_resource_policy(PolicyStatement(
    effect=Effect.ALLOW,
    actions=["s3:*"],
    resources=[bucket.arn_for_objects("*")],
    principals=[AnyPrincipal()]  # Sensitive
))
```

Compliant Solution

This policy allows only the authorized users:

```python
from aws_cdk.aws_iam import PolicyStatement, AccountRootPrincipal, Effect
from aws_cdk.aws_s3 import Bucket

bucket = Bucket(self, "ExampleBucket")

bucket.add_to_resource_policy(PolicyStatement(
    effect=Effect.ALLOW,
    actions=["s3:*"],
    resources=[bucket.arn_for_objects("*")],
    principals=[AccountRootPrincipal()]
))
```

See
|
||||||||||||
python:S6275 |
Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

```python
from aws_cdk.aws_ec2 import Volume

class EBSVolumeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self, "unencrypted-explicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1),
            encrypted=False  # Sensitive
        )
```

```python
from aws_cdk.aws_ec2 import Volume

class EBSVolumeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self, "unencrypted-implicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1)
        )  # Sensitive as encryption is disabled by default
```

Compliant Solution

```python
from aws_cdk.aws_ec2 import Volume

class EBSVolumeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self, "encrypted-explicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1),
            encrypted=True
        )
```

See |
||||||||||||
python:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

```python
import random

random.getrandbits(1)  # Sensitive
random.randint(0, 9)  # Sensitive
random.random()  # Sensitive

# The following functions are sadly also used to generate salts by selecting
# characters from a string, e.g. "abcdefghijk"...
random.sample(['a', 'b'], 1)  # Sensitive
random.choice(['a', 'b'])  # Sensitive
random.choices(['a', 'b'])  # Sensitive
```

See
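For the security-sensitive cases above, Python's `secrets` module (backed by the operating system's CSPRNG) offers drop-in replacements for each flagged `random` function:

```python
import secrets
import string

token = secrets.token_hex(16)   # replaces ad-hoc random strings; yields 32 hex chars
digit = secrets.randbelow(10)   # replaces random.randint(0, 9)

# Replaces the random.choice/random.choices salt-generation idiom.
alphabet = string.ascii_letters + string.digits
salt = "".join(secrets.choice(alphabet) for _ in range(16))

assert len(token) == 32
assert 0 <= digit < 10
assert len(salt) == 16 and all(c in alphabet for c in salt)
```

Unlike `random`, these functions are not seedable or reproducible, which is exactly the property required for tokens, salts, and session identifiers.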
|
||||||||||||
python:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization: customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Python Standard Library

Code examples

Noncompliant code example

import ssl

ssl.SSLContext(ssl.PROTOCOL_SSLv3)  # Noncompliant

Compliant solution

import ssl

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts
Standards
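As a supplement to the compliant solution above: where TLS v1.2 compatibility is not needed, the same ssl API can pin the minimum negotiated version to TLS v1.3. This sketch assumes a client-side context:

```python
import ssl

# Default client context: certificate and hostname verification are enabled
context = ssl.create_default_context()

# Refuse to negotiate anything older than TLS 1.3
context.minimum_version = ssl.TLSVersion.TLSv1_3
```

create_default_context() is preferred over instantiating SSLContext directly because it ships with sane verification defaults.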
|
||||||||||||
python:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization: customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in pyca

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm. Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
Noncompliant code example

Here is an example of a private key generation with RSA:

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()
private_key = rsa.generate_private_key(key_size=1024, backend=backend)  # Noncompliant
public_key = private_key.public_key()

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()
private_key = dsa.generate_private_key(key_size=1024, backend=backend)  # Noncompliant
public_key = private_key.public_key()

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend

backend = default_backend()
private_key = ec.generate_private_key(curve=ec.SECT163R2(), backend=backend)  # Noncompliant
public_key = private_key.public_key()

Compliant solution

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()
private_key = rsa.generate_private_key(key_size=3072, backend=backend)
public_key = private_key.public_key()

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()
private_key = dsa.generate_private_key(key_size=3072, backend=backend)
public_key = private_key.public_key()

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend

backend = default_backend()
private_key = ec.generate_private_key(curve=ec.SECP521R1(), backend=backend)
public_key = private_key.public_key()

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community. The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both; it provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys. Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names: for example, SECP521R1, used in the compliant solution above, provides 521-bit keys. Currently, a minimum key size of 224 bits is recommended for EC-based algorithms. Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:
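One widely deployed NIST-approved curve is P-256 (secp256r1), whose 256-bit keys sit comfortably above the 224-bit minimum. A minimal sketch using the same cryptography package as the examples above:

```python
from cryptography.hazmat.primitives.asymmetric import ec

# NIST P-256 (secp256r1): 256-bit keys, above the 224-bit minimum
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

print(private_key.curve.name, private_key.curve.key_size)
```

Recent versions of the library no longer require an explicit backend argument, which is why none is passed here.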
Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources
Articles & blog posts
Standards
|
||||||||||||
python:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

Django application startup:

from django.conf import settings

settings.configure(DEBUG=True)  # Sensitive when set to True
settings.configure(DEBUG_PROPAGATE_EXCEPTIONS=True)  # Sensitive when set to True

def custom_config(config):
    settings.configure(default_settings=config, DEBUG=True)  # Sensitive

Inside the Django settings file:

DEBUG = True  # Sensitive
DEBUG_PROPAGATE_EXCEPTIONS = True  # Sensitive

Flask application startup:

from flask import Flask

app = Flask(__name__)
app.debug = True  # Sensitive
app.run(debug=True)  # Sensitive

Compliant Solution

from django.conf import settings

settings.configure(DEBUG=False)
settings.configure(DEBUG_PROPAGATE_EXCEPTIONS=False)

def custom_config(config):
    settings.configure(default_settings=config, DEBUG=False)

DEBUG = False
DEBUG_PROPAGATE_EXCEPTIONS = False

from flask import Flask

app = Flask(__name__)
app.debug = False
app.run(debug=False)

See |
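A common way to keep debug off in production is to derive the flag from an environment variable that defaults to off. A minimal sketch; the variable name APP_DEBUG is an arbitrary choice for illustration:

```python
import os

def debug_enabled() -> bool:
    # Off unless explicitly enabled in the environment, e.g. APP_DEBUG=1 in development
    return os.environ.get("APP_DEBUG", "0").lower() in ("1", "true", "yes")

# e.g. settings.configure(DEBUG=debug_enabled()) or app.run(debug=debug_enabled())
```

With this pattern, a freshly deployed production environment that sets no variable at all gets the safe default.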
||||||||||||
python:S4787 |
This rule is deprecated; use S4426, S5542, S5547 instead. Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities: Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only means to defeat or weaken an encryption. This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305, AESGCM, AESCCM
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers import Cipher

def encrypt(key):
    Fernet(key)  # Sensitive
    ChaCha20Poly1305(key)  # Sensitive
    AESGCM(key)  # Sensitive
    AESCCM(key)  # Sensitive

private_key = rsa.generate_private_key()  # Sensitive

def encrypt2(algorithm, mode, backend):
    Cipher(algorithm, mode, backend)  # Sensitive
from nacl.public import Box
from nacl.secret import SecretBox

def public_encrypt(secret_key, public_key):
    Box(secret_key, public_key)  # Sensitive

def secret_encrypt(key):
    SecretBox(key)  # Sensitive

See
|
||||||||||||
python:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that turns into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers will compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

For tarfile module:

import tarfile

tfile = tarfile.open("TarBomb.tar")
tfile.extractall('./tmp/')  # Sensitive
tfile.close()

For zipfile module:

import zipfile

zfile = zipfile.ZipFile('ZipBomb.zip', 'r')
zfile.extractall('./tmp/')  # Sensitive
zfile.close()

Compliant Solution

For tarfile module:

import tarfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

totalSizeArchive = 0
totalEntryArchive = 0

tfile = tarfile.open("TarBomb.tar")
for entry in tfile:
    tarinfo = tfile.extractfile(entry)

    totalEntryArchive += 1

    sizeEntry = 0
    result = b''
    while True:
        sizeEntry += 1024
        totalSizeArchive += 1024

        ratio = sizeEntry / entry.size
        if ratio > THRESHOLD_RATIO:
            # ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
            break

        chunk = tarinfo.read(1024)
        if not chunk:
            break

        result += chunk

    if totalEntryArchive > THRESHOLD_ENTRIES:
        # too many entries in this archive, can lead to inodes exhaustion of the system
        break

    if totalSizeArchive > THRESHOLD_SIZE:
        # the uncompressed data size is too much for the application resource capacity
        break

tfile.close()

For zipfile module:

import zipfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

totalSizeArchive = 0
totalEntryArchive = 0

zfile = zipfile.ZipFile('ZipBomb.zip', 'r')
for zinfo in zfile.infolist():
    print('File', zinfo.filename)
    data = zfile.read(zinfo)

    totalEntryArchive += 1
    totalSizeArchive = totalSizeArchive + len(data)

    ratio = len(data) / zinfo.compress_size
    if ratio > THRESHOLD_RATIO:
        # ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
        break

    if totalSizeArchive > THRESHOLD_SIZE:
        # the uncompressed data size is too much for the application resource capacity
        break

    if totalEntryArchive > THRESHOLD_ENTRIES:
        # too many entries in this archive, can lead to inodes exhaustion of the system
        break

zfile.close()

See
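A cheap pre-flight check with the standard zipfile module is to sum the uncompressed sizes declared in the central directory before extracting anything. A minimal sketch; the 100 MB budget is an arbitrary choice for illustration. Note that declared sizes can themselves be forged, so streaming ratio checks like the ones above remain necessary during the actual extraction:

```python
import io
import zipfile

MAX_UNCOMPRESSED = 100 * 1024 * 1024  # arbitrary 100 MB budget

def safe_total_size(zf: zipfile.ZipFile) -> int:
    # Sum the uncompressed sizes declared in the central directory
    total = sum(info.file_size for info in zf.infolist())
    if total > MAX_UNCOMPRESSED:
        raise ValueError(f"archive would expand to {total} bytes, refusing")
    return total

# Build a small in-memory archive to demonstrate
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('a.txt', b'x' * 1000)

with zipfile.ZipFile(buf) as zf:
    print(safe_total_size(zf))  # 1000
```

This rejects obviously oversized archives before any disk or memory is committed to them.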
|
||||||||||||
python:S5300 |
This rule is deprecated, and will eventually be removed. Sending emails is security-sensitive and can expose an application to a large range of vulnerabilities.

Information Exposure: Emails often contain sensitive information which might be exposed to an attacker if he can add an arbitrary address to the recipient list.

Spamming / Phishing: Malicious users can abuse email-based features to send spam or phishing content.

Dangerous Content Injection: Emails can contain HTML and JavaScript code, thus they can be used for XSS attacks.

Email Headers Injection: Email fields such as the sender or recipient lists, when built from user input, can be abused to inject additional headers or recipients.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

smtplib:

import smtplib

def send(from_email, to_email, msg):
    server = smtplib.SMTP('localhost', 1025)
    server.sendmail(from_email, to_email, msg)  # Sensitive

Django:

from django.core.mail import send_mail

def send(subject, msg, from_email, to_email):
    send_mail(subject, msg, from_email, [to_email])  # Sensitive

Flask-Mail:

from flask import Flask
from flask_mail import Mail, Message

app = Flask(__name__)

def send(subject, msg, from_email, to_email):
    mail = Mail(app)
    msg = Message(subject, [to_email], msg, sender=from_email)
    mail.send(msg)  # Sensitive

See |
||||||||||||
python:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest mode is ECB (Electronic Codebook): repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used, but are prone to attacks that manipulate the ciphertext; they must be used with caution. For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message, it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in PyCrypto

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

from Crypto.Cipher import AES

AES.new(key, AES.MODE_ECB)  # Noncompliant

Example with an asymmetric cipher, RSA:

from Crypto.Cipher import PKCS1_v1_5

PKCS1_v1_5.new(key)  # Noncompliant

Compliant solution

Since PyCrypto is not supported anymore, another library should be used. In the current context, Cryptodome uses a similar API.
For the AES symmetric cipher, use the GCM mode:

from Crypto.Cipher import AES

AES.new(key, AES.MODE_GCM)

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

from Crypto.Cipher import PKCS1_OAEP

PKCS1_OAEP.new(key)

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter Mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthen the regular inner workings of RSA.

Resources

Articles & blog posts
Standards |
||||||||||||
python:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message, it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Cryptodome

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

from Crypto.Cipher import DES       # pycryptodome
from Cryptodome.Cipher import DES   # pycryptodomex

cipher = DES.new(key, DES.MODE_OFB)  # Noncompliant

Compliant solution

from Crypto.Cipher import AES       # pycryptodome
from Cryptodome.Cipher import AES   # pycryptodomex

cipher = AES.new(key, AES.MODE_CCM)

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards |
||||||||||||
python:S5659 |
This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in PyJWT

Code examples

The following code contains an example of JWT decoding without verification of the signature.
Noncompliant code example

import jwt

jwt.decode(token, options={"verify_signature": False})  # Noncompliant

Compliant solution

By default, signature verification is enabled for the jwt.decode method:

import jwt

jwt.decode(token, key, algorithms="HS256")

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose. Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked. To resolve the issue, follow these instructions:
By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards |
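What signature verification actually does can be sketched with the standard library alone. This is an illustration of the HS256 mechanics, not a full JWT implementation; in production, use PyJWT as shown above:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, key: bytes) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    # Constant-time comparison; reject any token whose signature does not match
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

key = b"example-secret"  # illustration only: load real keys from the environment
token = sign_hs256({"sub": "alice"}, key)
print(verify_hs256(token, key))  # {'sub': 'alice'}
```

The point of the sketch is the verify step: the claims are only trusted after the HMAC over the header and body has been recomputed and compared.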
||||||||||||
python:S6252 |
S3 buckets can be versioned. When the S3 bucket is unversioned, it means that a new version of an object overwrites an existing one in the S3 bucket. It can lead to unintentional or intentional information loss.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

bucket = s3.Bucket(self, "bucket",
    versioned=False  # Sensitive
)

The default value of versioned is False, so omitting it is sensitive as well.

Compliant Solution

bucket = s3.Bucket(self, "bucket",
    versioned=True
)

See
|
||||||||||||
python:S2257 |
The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard, vetted algorithms should be used instead. This rule tracks the creation of custom classes extending BasePasswordHasher.

Recommended Secure Coding Practices
Sensitive Code Example

class CustomPasswordHasher(BasePasswordHasher):  # Sensitive
    # ...

See |
||||||||||||
python:S3330 |
When a cookie is not configured with the HttpOnly attribute, it can be read by client-side scripts; an attacker exploiting a Cross-Site Scripting (XSS) vulnerability could then steal it.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

Flask:

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value')  # Sensitive
    return response

Compliant Solution

Flask:

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value', httponly=True)  # Compliant
    return response

See
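Outside Flask, the same flag can be set with the standard library's http.cookies module; a minimal sketch that also sets the related Secure attribute:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "value"
cookie["session"]["httponly"] = True  # keep the cookie out of reach of client-side scripts
cookie["session"]["secure"] = True    # only send the cookie over HTTPS

print(cookie["session"].OutputString())
```

The resulting header value carries both flags, so browsers will withhold the cookie from JavaScript and from plain HTTP requests.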
|
||||||||||||
python:S4433 |
Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:
A server that accepts either the Anonymous or Unauthenticated mechanism will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider. In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information. Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider. If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.
How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

import ldap

def init_ldap():
    connect = ldap.initialize('ldap://example:1389')

    connect.simple_bind('cn=root')  # Noncompliant
    connect.simple_bind_s('cn=root')  # Noncompliant
    connect.bind_s('cn=root', None)  # Noncompliant
    connect.bind('cn=root', None)  # Noncompliant

Compliant solution

import ldap
import os

def init_ldap():
    connect = ldap.initialize('ldap://example:1389')

    connect.simple_bind('cn=root', os.environ.get('LDAP_PASSWORD'))
    connect.simple_bind_s('cn=root', os.environ.get('LDAP_PASSWORD'))
    connect.bind_s('cn=root', os.environ.get('LDAP_PASSWORD'))
    connect.bind('cn=root', os.environ.get('LDAP_PASSWORD'))

Resources

Documentation
Standards |
||||||||||||
python:S4784 |
This rule is deprecated; use S5852, S2631 instead. Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities: Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions, such as the (a*)*b pattern used in the examples below, can take a very long time to evaluate on inputs they do not match. Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions, thereby making the service inaccessible to genuine users. This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{.

Example: (a+)*

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. If possible, use a library which is not vulnerable to ReDoS attacks, such as Google RE2. Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Sensitive Code Example

Django:

from django.core.validators import RegexValidator
from django.urls import re_path

RegexValidator('(a*)*b')  # Sensitive

def define_http_endpoint(view):
    re_path(r'^(a*)*b/$', view)  # Sensitive
import re
from re import compile, match, search, fullmatch, split, findall, finditer, sub, subn

input = 'input string'
replacement = 'replacement'

re.compile('(a*)*b')  # Sensitive
re.match('(a*)*b', input)  # Sensitive
re.search('(a*)*b', input)  # Sensitive
re.fullmatch('(a*)*b', input)  # Sensitive
re.split('(a*)*b', input)  # Sensitive
re.findall('(a*)*b', input)  # Sensitive
re.finditer('(a*)*b', input)  # Sensitive
re.sub('(a*)*b', replacement, input)  # Sensitive
re.subn('(a*)*b', replacement, input)  # Sensitive
import regex
from regex import compile, match, search, fullmatch, split, findall, finditer, sub, subn, subf, subfn, splititer

input = 'input string'
replacement = 'replacement'

regex.subf('(a*)*b', replacement, input)  # Sensitive
regex.subfn('(a*)*b', replacement, input)  # Sensitive
regex.splititer('(a*)*b', input)  # Sensitive

regex.compile('(a*)*b')  # Sensitive
regex.match('(a*)*b', input)  # Sensitive
regex.search('(a*)*b', input)  # Sensitive
regex.fullmatch('(a*)*b', input)  # Sensitive
regex.split('(a*)*b', input)  # Sensitive
regex.findall('(a*)*b', input)  # Sensitive
regex.finditer('(a*)*b', input)  # Sensitive
regex.sub('(a*)*b', replacement, input)  # Sensitive
regex.subn('(a*)*b', replacement, input)  # Sensitive

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example:

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See
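The danger in the flagged pattern comes from ambiguous nested repetition: (a*)*b gives the engine exponentially many ways to split a run of a characters when the match fails. A sketch of the usual fix, rewriting the pattern so each character can only be consumed one way; here (a*)*b accepts exactly the same strings as a*b:

```python
import re

vulnerable = re.compile(r'(a*)*b')  # nested quantifiers: exponential backtracking on failure
safe = re.compile(r'a*b')           # unambiguous: linear time

# Both accept the same language...
for s in ('b', 'aab', 'aaab'):
    assert vulnerable.fullmatch(s) and safe.fullmatch(s)

# ...but only the safe one fails fast on a long non-matching input
# (running vulnerable.fullmatch on this string would effectively hang)
print(bool(safe.fullmatch('a' * 100000)))  # False, returns immediately
```

When an equivalent unambiguous pattern is not obvious, a non-backtracking engine such as Google RE2 is the safer option.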
|
||||||||||||
python:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure for security-sensitive use cases.

Ask Yourself Whether
The hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
Safer alternatives, such as SHA-512, should be used instead.

Sensitive Code Example

import hashlib
m = hashlib.md5()  # Sensitive

import hashlib
m = hashlib.sha1()  # Sensitive

import md5  # Sensitive and deprecated since Python 2.5; use the hashlib module instead.
m = md5.new()

import sha  # Sensitive and deprecated since Python 2.5; use the hashlib module instead.
m = sha.new()

Compliant Solution

import hashlib
m = hashlib.sha512()  # Compliant

See
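When the hashed value protects passwords specifically, even a strong plain hash such as SHA-512 is too fast on its own; a salted key-derivation function is the usual choice. Below is a minimal sketch using the standard library's `hashlib.pbkdf2_hmac`; the iteration count and helper names are illustrative assumptions, not part of the rule:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash suitable for password storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, ITERATIONS)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)
```

The random per-user salt prevents precomputed (rainbow-table) attacks, and the iteration count slows brute-force attempts.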
|
||||||||||||
python:S4792 |
This rule is deprecated, and will eventually be removed. Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities: Logs are useful before, during and after a security incident.
Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how it is logged. This rule flags for review code that initiates logger configuration. The goal is to guide security code reviews. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:
Sensitive Code Example

import logging
from logging import Logger, Handler, Filter
from logging.config import fileConfig, dictConfig

logging.basicConfig()  # Sensitive

logging.disable()  # Sensitive

def update_logging(logger_class):
    logging.setLoggerClass(logger_class)  # Sensitive

def set_last_resort(last_resort):
    logging.lastResort = last_resort  # Sensitive

class CustomLogger(Logger):  # Sensitive
    pass

class CustomHandler(Handler):  # Sensitive
    pass

class CustomFilter(Filter):  # Sensitive
    pass

def update_config(path, config):
    fileConfig(path)  # Sensitive
    dictConfig(config)  # Sensitive

See
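One recurring recommendation for logger configuration is to keep sensitive values out of log records. A minimal sketch of a redacting `logging.Filter` is shown below; the `password=` pattern and logger name are illustrative assumptions:

```python
import logging
import re

class RedactingFilter(logging.Filter):
    """Mask anything that looks like a password=... pair before it is emitted."""
    PATTERN = re.compile(r"(password=)\S+", re.IGNORECASE)

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub(r"\1***", str(record.msg))
        return True  # keep the record, just rewritten

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)
logger.warning("login failed for user=bob password=hunter2")  # emitted as password=***
```

Attaching the filter to the handler (rather than one logger) redacts every record that passes through that output path.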
|
||||||||||||
python:S5527 |
This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?
Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. To do so, an attacker would obtain a valid certificate authenticating

What is the potential impact?
Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing
If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Python Standard Library
Code examples
The following code contains examples of disabled hostname validation.
Certificate validation is not enabled by default when

Noncompliant code example

import ssl

example = ssl._create_stdlib_context()  # Noncompliant

example = ssl._create_default_https_context()
example.check_hostname = False  # Noncompliant

Compliant solution

import ssl

example = ssl.create_default_context()

example = ssl._create_default_https_context()

How does this work?
To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates
If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues. Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:
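With validation left at its defaults, a connection to a host whose certificate does not match fails closed. The sketch below illustrates this; the `fetch_head` helper and its use of port 443 are illustrative assumptions, not part of the rule:

```python
import socket
import ssl

# create_default_context() enables both certificate and hostname validation
context = ssl.create_default_context()
assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED

def fetch_head(host: str, port: int = 443) -> bytes:
    """Open a TLS connection that fails closed on a certificate/hostname mismatch."""
    with socket.create_connection((host, port)) as sock:
        # server_hostname drives both SNI and the hostname check; a mismatch
        # raises ssl.SSLCertVerificationError instead of silently connecting.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(64)
```

Because `check_hostname` and `verify_mode` are already correct on the default context, the fix is usually to stop overriding them rather than to add code.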
Resources
Standards
|
||||||||||||
python:S6281 |
By default, S3 buckets are private: only the bucket owner can access them. This access control can be relaxed with ACLs or policies. To prevent permissive policies from being set on an S3 bucket, the following boolean settings can be enabled:
The other attribute

However, all of those options can be enabled by setting the

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to configure:
Sensitive Code Example
By default, when not set, the

bucket = s3.Bucket(self, "bucket")  # Sensitive

This

bucket = s3.Bucket(self, "bucket",
    block_public_access=s3.BlockPublicAccess(
        block_public_acls=False,  # Sensitive
        ignore_public_acls=True,
        block_public_policy=True,
        restrict_public_buckets=True
    )
)

The attribute

bucket = s3.Bucket(self, "bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ACLS  # Sensitive
)

Compliant Solution
This

bucket = s3.Bucket(self, "bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL  # Compliant
)

A similar configuration to the one above can be obtained by setting all parameters of the

bucket = s3.Bucket(self, "bucket",
    block_public_access=s3.BlockPublicAccess(  # Compliant
        block_public_acls=True,
        ignore_public_acls=True,
        block_public_policy=True,
        restrict_public_buckets=True
    )
)

See
|
||||||||||||
python:S6304 |
A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information may occur.

Ask Yourself Whether
The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices
It’s recommended to apply the least privilege principle, i.e., by only granting access to necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes managing secure access control less prone to errors.

Sensitive Code Example
The wildcard

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["iam:CreatePolicyVersion"],
            resources=["*"]  # Sensitive
        )
    ]
)

Compliant Solution
Restrict the update permission to the appropriate subset of policies:

from aws_cdk import Aws
from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["iam:CreatePolicyVersion"],
            resources=[f"arn:aws:iam::{Aws.ACCOUNT_ID}:policy/team1/*"]
        )
    ]
)

Exceptions
See
|
||||||||||||
python:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

username = 'admin'
password = 'admin'  # Sensitive
usernamePassword = 'user=admin&password=admin'  # Sensitive

Compliant Solution

import os

username = os.getenv("username")  # Compliant
password = os.getenv("password")  # Compliant
usernamePassword = 'user=%s&password=%s' % (username, password)  # Compliant

See
|
||||||||||||
python:S5332 |
Clear-text protocols such as FTP, Telnet, and HTTP transmit data without encryption, exposing it to interception and tampering.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications decreases the chances that attackers can successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com"  # Sensitive
url = "ftp://anonymous@example.com"  # Sensitive
url = "telnet://anonymous@example.com"  # Sensitive

import telnetlib
cnx = telnetlib.Telnet("towel.blinkenlights.nl")  # Sensitive

import ftplib
cnx = ftplib.FTP("ftp.example.com")  # Sensitive

import smtplib
smtp = smtplib.SMTP("smtp.example.com", port=587)  # Sensitive

For aws_cdk.aws_elasticloadbalancingv2.ApplicationLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.ApplicationLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)
lb.add_listener(
    "Listener-default",
    port=80,  # Sensitive
    open=True
)
lb.add_listener(
    "Listener-http-explicit",
    protocol=elbv2.ApplicationProtocol.HTTP,  # Sensitive
    port=8080,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.ApplicationListener(
    self,
    "listener-http-explicit-const",
    load_balancer=lb,
    protocol=elbv2.ApplicationProtocol.HTTP,  # Sensitive
    port=8081,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.NetworkLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)
lb.add_listener(  # Sensitive
    "Listener-default",
    port=1234
)
lb.add_listener(
    "Listener-TCP-explicit",
    protocol=elbv2.Protocol.TCP,  # Sensitive
    port=1337
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.NetworkListener(
    self,
    "Listener-TCP-explicit",
    protocol=elbv2.Protocol.TCP,  # Sensitive
    port=1338,
    load_balancer=lb
)

For aws_cdk.aws_elasticloadbalancingv2.CfnListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.CfnListener(
    self,
    "listener-http",
    default_actions=[application_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="HTTP",  # Sensitive
    port=80
)
elbv2.CfnListener(
    self,
    "listener-tcp",
    default_actions=[network_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="TCP",  # Sensitive
    port=1000
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancerListener:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancerListener(
    external_port=10000,
    external_protocol=elb.LoadBalancingProtocol.TCP,  # Sensitive
    internal_port=10000
)
elb.LoadBalancerListener(
    external_port=10080,
    external_protocol=elb.LoadBalancingProtocol.HTTP,  # Sensitive
    internal_port=10080
)

For aws_cdk.aws_elasticloadbalancing.CfnLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb
)

elb.CfnLoadBalancer(
    self,
    "elb-tcp",
    listeners=[
        elb.CfnLoadBalancer.ListenersProperty(
            instance_port="10000",
            load_balancer_port="10000",
            protocol="tcp"  # Sensitive
        )
    ],
    subnets=vpc.select_subnets().subnet_ids
)
elb.CfnLoadBalancer(
    self,
    "elb-http-dict",
    listeners=[
        {
            "instancePort": "10000",
            "loadBalancerPort": "10000",
            "protocol": "http"  # Sensitive
        }
    ],
    subnets=vpc.select_subnets().subnet_ids
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancer(
    self,
    "elb-tcp-dict",
    vpc=vpc,
    listeners=[
        {
            "externalPort": 10000,
            "externalProtocol": elb.LoadBalancingProtocol.TCP,  # Sensitive
            "internalPort": 10000
        }
    ]
)
loadBalancer.add_listener(
    external_port=10081,
    external_protocol=elb.LoadBalancingProtocol.HTTP,  # Sensitive
    internal_port=10081
)
loadBalancer.add_listener(
    external_port=10001,
    external_protocol=elb.LoadBalancingProtocol.TCP,  # Sensitive
    internal_port=10001
)

For aws_cdk.aws_elasticache.CfnReplicationGroup:

from aws_cdk import (
    aws_elasticache as elasticache
)

elasticache.CfnReplicationGroup(
    self,
    "unencrypted-explicit",
    replication_group_description="a replication group",
    automatic_failover_enabled=False,
    transit_encryption_enabled=False,  # Sensitive
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)
elasticache.CfnReplicationGroup(  # Sensitive, encryption is disabled by default
    self,
    "unencrypted-implicit",
    replication_group_description="a test replication group",
    automatic_failover_enabled=False,
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

For aws_cdk.aws_kinesis.CfnStream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

kinesis.CfnStream(  # Sensitive, encryption is disabled by default for CfnStreams
    self,
    "cfnstream-implicit-unencrytped",
    shard_count=1
)
kinesis.CfnStream(
    self,
    "cfnstream-explicit-unencrytped",
    shard_count=1,
    stream_encryption=None  # Sensitive
)

For aws_cdk.aws_kinesis.Stream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

stream = kinesis.Stream(
    self,
    "stream-explicit-unencrypted",
    shard_count=1,
    encryption=kinesis.StreamEncryption.UNENCRYPTED  # Sensitive
)

Compliant Solution

url = "https://example.com"
url = "sftp://anonymous@example.com"
url = "ssh://anonymous@example.com"

import ftplib
cnx = ftplib.FTP_TLS("ftp.example.com")

import smtplib
smtp = smtplib.SMTP("smtp.example.com", port=587)
smtp.starttls(context=context)
smtp_ssl = smtplib.SMTP_SSL("smtp.gmail.com", port=465)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.ApplicationLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)
lb.add_listener(
    "Listener-https-explicit",
    protocol=elbv2.ApplicationProtocol.HTTPS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443,
    open=True
)
lb.add_listener(
    "Listener-https-implicit",
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=8443,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.ApplicationListener(
    self,
    "listener-https-explicit-const",
    load_balancer=lb,
    protocol=elbv2.ApplicationProtocol.HTTPS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=444,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.NetworkLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)
lb.add_listener(
    "Listener-TLS-explicit",
    protocol=elbv2.Protocol.TLS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443
)
lb.add_listener(
    "Listener-TLS-implicit",
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=1024
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.NetworkListener(
    self,
    "Listener-TLS-explicit",
    protocol=elbv2.Protocol.TLS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443,
    load_balancer=lb
)

For aws_cdk.aws_elasticloadbalancingv2.CfnListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.CfnListener(
    self,
    "listener-https",
    default_actions=[application_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="HTTPS",
    port=443,
    certificates=[elbv2.CfnListener.CertificateProperty(
        certificate_arn="certificateARN"
    )]
)
elbv2.CfnListener(
    self,
    "listener-tls",
    default_actions=[network_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="TLS",
    port=1001,
    certificates=[elbv2.CfnListener.CertificateProperty(
        certificate_arn="certificateARN"
    )]
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancerListener:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancerListener(
    external_port=10043,
    external_protocol=elb.LoadBalancingProtocol.SSL,
    internal_port=10043,
    ssl_certificate_arn="certificateARN"
)
elb.LoadBalancerListener(
    external_port=10443,
    external_protocol=elb.LoadBalancingProtocol.HTTPS,
    internal_port=10443,
    ssl_certificate_arn="certificateARN"
)

For aws_cdk.aws_elasticloadbalancing.CfnLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.CfnLoadBalancer(
    self,
    "elb-ssl",
    listeners=[
        elb.CfnLoadBalancer.ListenersProperty(
            instance_port="10043",
            load_balancer_port="10043",
            protocol="ssl",
            ssl_certificate_id=CERTIFICATE_ARN
        )
    ],
    subnets=vpc.select_subnets().subnet_ids
)
elb.CfnLoadBalancer(
    self,
    "elb-https-dict",
    listeners=[
        {
            "instancePort": "10443",
            "loadBalancerPort": "10443",
            "protocol": "https",
            "sslCertificateId": CERTIFICATE_ARN
        }
    ],
    subnets=vpc.select_subnets().subnet_ids
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancer(
    self,
    "elb-ssl",
    vpc=vpc,
    listeners=[
        {
            "externalPort": 10044,
            "externalProtocol": elb.LoadBalancingProtocol.SSL,
            "internalPort": 10044,
            "sslCertificateArn": "certificateARN"
        },
        {
            "externalPort": 10444,
            "externalProtocol": elb.LoadBalancingProtocol.HTTPS,
            "internalPort": 10444,
            "sslCertificateArn": "certificateARN"
        }
    ]
)
loadBalancer = elb.LoadBalancer(
    self,
    "elb-multi-listener",
    vpc=vpc
)
loadBalancer.add_listener(
    external_port=10045,
    external_protocol=elb.LoadBalancingProtocol.SSL,
    internal_port=10045,
    ssl_certificate_arn="certificateARN"
)
loadBalancer.add_listener(
    external_port=10445,
    external_protocol=elb.LoadBalancingProtocol.HTTPS,
    internal_port=10445,
    ssl_certificate_arn="certificateARN"
)

For aws_cdk.aws_elasticache.CfnReplicationGroup:

from aws_cdk import (
    aws_elasticache as elasticache
)

elasticache.CfnReplicationGroup(
    self,
    "encrypted-explicit",
    replication_group_description="a test replication group",
    automatic_failover_enabled=False,
    transit_encryption_enabled=True,
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

For aws_cdk.aws_kinesis.CfnStream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

kinesis.CfnStream(
    self,
    "cfnstream-explicit-encrytped",
    shard_count=1,
    stream_encryption=kinesis.CfnStream.StreamEncryptionProperty(
        encryption_type="KMS",
        key_id="alias/aws/kinesis"
    )
)
stream = kinesis.CfnStream(
    self,
    "cfnstream-explicit-encrytped-dict",
    shard_count=1,
    stream_encryption={
        "encryptionType": "KMS",
        "keyId": "alias/aws/kinesis"
    }
)

For aws_cdk.aws_kinesis.Stream:

from aws_cdk import (
    aws_kinesis as kinesis,
    aws_kms as kms
)

stream = kinesis.Stream(  # Encryption is enabled by default for Streams
    self,
    "stream-implicit-encrypted",
    shard_count=1
)
stream = kinesis.Stream(
    self,
    "stream-explicit-encrypted-managed",
    shard_count=1,
    encryption=kinesis.StreamEncryption.MANAGED
)
key = kms.Key(self, "managed_key")
stream = kinesis.Stream(
    self,
    "stream-explicit-encrypted-selfmanaged",
    shard_count=1,
    encryption=kinesis.StreamEncryption.KMS,
    encryption_key=key
)

Exceptions
No issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
python:S6302 |
A policy that grants all permissions may indicate improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur, resulting in loss of information.

Ask Yourself Whether
Identities obtaining all the permissions:
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
It’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used.

Sensitive Code Example
A customer-managed policy that grants all permissions by using the wildcard (*) in the

from aws_cdk.aws_iam import PolicyStatement, Effect

PolicyStatement(
    effect=Effect.ALLOW,
    actions=["*"],  # Sensitive
    resources=["arn:aws:iam:::user/*"]
)

Compliant Solution
A customer-managed policy that grants only the required permissions:

from aws_cdk.aws_iam import PolicyStatement, Effect

PolicyStatement(
    effect=Effect.ALLOW,
    actions=["iam:GetAccountSummary"],
    resources=["arn:aws:iam:::user/*"]
)

See
|
||||||||||||
python:S6303 |
Using unencrypted RDS DB resources exposes data to unauthorized access. This situation can occur in a variety of scenarios, such as:
After a successful intrusion, the underlying applications are exposed to:
AWS-managed encryption at rest reduces this risk with a simple switch. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.

Sensitive Code Example
For aws_cdk.aws_rds.DatabaseCluster and aws_cdk.aws_rds.DatabaseInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.DatabaseCluster(  # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_rds.CfnDBCluster and aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.CfnDBCluster(  # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution
For aws_cdk.aws_rds.DatabaseCluster and aws_cdk.aws_rds.DatabaseInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.DatabaseCluster(
            self,
            "example",
            storage_encrypted=True
        )

For aws_cdk.aws_rds.CfnDBCluster and aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.CfnDBCluster(
            self,
            "example",
            storage_encrypted=True
        )

See
|
||||||||||||
python:S6785 |
GraphQL servers are vulnerable to Denial of Service attacks when they fail to limit the depth of queries. In such a case, an attacker is able to craft complex, deeply nested queries that make the application consume a significant amount of resources.

Why is this an issue?
When a server receives a deeply nested query, it attempts to resolve all the requested data. This process can consume a substantial amount of computational resources, leading to a slowdown in server response times.

What is the potential impact?
A server that faces a resource exhaustion situation can become unstable. The exact impact will depend on how the affected application is deployed and how well the hosting server configuration is hardened.

In the worst case, when the application is deployed in an uncontained environment, directly on its host system, the memory exhaustion will affect the whole hosting server. The server’s operating system might start killing arbitrary memory-intensive processes, including the main application or other sensitive ones. This will result in a general operating failure, also known as a Denial of Service (DoS).

In cases where the application is deployed in a virtualized or otherwise contained environment, or where resource usage limits are in place, the consequences are limited to the vulnerable application only. In that case, other processes and applications hosted on the same server may keep running without disruption, but the vulnerable application will still stop working properly.

In general, this kind of DoS attack can have severe financial consequences, particularly when the affected systems are business-critical.
How to fix it

Code examples

Noncompliant code example

from graphql_server.flask import GraphQLView

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(  # Noncompliant
        name="api",
        schema=schema,
    )
)

Compliant solution

from graphql_server.flask import GraphQLView
from graphene.validation import depth_limit_validator

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(
        name="api",
        schema=schema,
        validation_rules=[
            depth_limit_validator(10)  # Choose a value that fits your application's requirements
        ]
    )
)

How does this work?

Avoid circular references
A prerequisite for a deeply nested query to be executed is the presence of circular references in the database schema. Avoid or minimize circular references when designing the application’s database schema.

Set limits
Limit the depth of the queries your server will accept. By setting a maximum depth, you can ensure that excessively nested queries are rejected. Remember, the values for maximum depth and complexity should be set according to your application’s specific needs. Setting these limits too low could restrict legitimate queries, while setting them too high could leave your server vulnerable to attacks.

The easiest way to set such a limit is to use the query validation API available from Graphene 3. Applications running Graphene 2 should consider upgrading to Graphene 3 to benefit from this API.

Resources
Standards |
||||||||||||
python:S6786 |
This vulnerability exposes information about all the APIs available on a GraphQL API server. This information can be used to discover weaknesses in the API that can be exploited.

Why is this an issue?
GraphQL introspection is a feature that allows client applications to query the schema of a GraphQL API at runtime. It provides a way for developers to explore and understand the available data and operations supported by the API. This feature is a diagnostic tool that should only be used in the development phase as its presence also creates risks. Clear documentation and API references should be considered better discoverability tools for a public GraphQL API.

What is the potential impact?
An attacker can use introspection to identify all of the operations and data types supported by the server. This information can then be used to identify potential targets for attacks.

Exploitation of private APIs
Even when a GraphQL API server is open to access by third-party applications, it may contain APIs that are intended only for private use. Introspection allows these private APIs to be discovered. Private APIs often do not receive the same level of security rigor as public APIs. For example, they may skip input validation because the API is only expected to be called from trusted applications. This can create avenues for attack that are not present on public APIs.

Exposure of sensitive data
GraphQL allows for multiple related objects to be retrieved using a single API call. This provides an efficient method of obtaining data for use in a client application. An attacker may be able to use these relationships between objects to traverse the data structure. They may be able to find a link to sensitive data that the developer did not intentionally make available.
How to fix it

Code examples

Noncompliant code example

from graphql_server.flask import GraphQLView

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(  # Noncompliant
        name="api",
        schema=schema,
    )
)

Compliant solution
Make sure that introspection is disabled in production environments. You can use the following code sample as a reference, in conjunction with your own methods for distinguishing between production and non-production environments.

from graphql_server.flask import GraphQLView

# Only one of the following needs to be used
from graphql.validation import NoSchemaIntrospectionCustomRule  # graphql-core v3
from graphene.validation import DisableIntrospection  # graphene v3

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(
        name="api",
        schema=schema,
        validation_rules=[
            NoSchemaIntrospectionCustomRule,
            DisableIntrospection,
        ]
    )
)

How does this work?

Disabling introspection
The GraphQL server framework should be instructed to disable introspection in production environments. This prevents any attacker attempt to retrieve schema information from the server at runtime. Each GraphQL framework will have a different method of doing this, possibly including:
If introspection is required, it should only be made available to the smallest possible audience. This could include development environments, users with a specific right, or requests from a specific set of IP addresses.

Resources
Articles & blog posts
Standards |
||||||||||||
python:S6308 |
Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated. To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:
Thus, adversaries cannot access the data if they gain physical access to the storage medium. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
It is recommended to encrypt OpenSearch domains that contain sensitive information. OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example
For aws_cdk.aws_opensearchservice.Domain:

from aws_cdk.aws_opensearchservice import Domain, EngineVersion

class DomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Domain(self, "Sensitive",
            version=EngineVersion.OPENSEARCH_1_3
        )  # Sensitive, encryption is disabled by default

For aws_cdk.aws_opensearchservice.CfnDomain:

from aws_cdk.aws_opensearchservice import CfnDomain

class CfnDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        CfnDomain(self, "Sensitive")  # Sensitive, encryption is disabled by default

Compliant Solution
For aws_cdk.aws_opensearchservice.Domain:

from aws_cdk.aws_opensearchservice import Domain, EncryptionAtRestOptions, EngineVersion

class DomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Domain(self, "Compliant",
            version=EngineVersion.OPENSEARCH_1_3,
            encryption_at_rest=EncryptionAtRestOptions(
                enabled=True
            )
        )

For aws_cdk.aws_opensearchservice.CfnDomain:

from aws_cdk.aws_opensearchservice import CfnDomain

class CfnDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        CfnDomain(self, "Compliant",
            encryption_at_rest_options=CfnDomain.EncryptionAtRestOptionsProperty(
                enabled=True
            )
        )

See
|
||||||||||||
python:S6781 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a JWT secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to encode and decode JWTs when using a symmetric signing algorithm, and an attacker could potentially use it to perform malicious actions. For example, an attacker could use the secret key to create their own authentication tokens that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality. In the worst-case scenario, an attacker could be able to execute arbitrary code on the application by abusing administrative features, and take over its hosting server.

How to fix it in Flask

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Changing the secret value is sufficient to invalidate any data that it protected.

Code examples

Noncompliant code example

The following noncompliant code contains a hard-coded secret that can be exposed unintentionally.
```python
from flask import Flask

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = secret_key  # Noncompliant
```

Compliant solution

A solution is to set this secret in an environment variable.

```python
from flask import Flask
import os

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = os.environ["JWT_SECRET_KEY"]
```

Going the extra mile

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Resources

Standards
Documentation
|
||||||||||||
python:S2077 |
Formatted SQL queries can be difficult to maintain and debug, and they can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
Sensitive Code Example

```python
from django.db import models
from django.db import connection
from django.db import connections
from django.db.models.expressions import RawSQL

value = input()

class MyUser(models.Model):
    name = models.CharField(max_length=200)

def query_my_user(request, params, value):
    with connection.cursor() as cursor:
        cursor.execute("{0}".format(value))  # Sensitive

    # https://docs.djangoproject.com/en/2.1/ref/models/expressions/#raw-sql-expressions
    RawSQL("select col from %s where mycol = %s and othercol = " + value, ("test",))  # Sensitive

    # https://docs.djangoproject.com/en/2.1/ref/models/querysets/#extra
    MyUser.objects.extra(
        select={'mycol': "select col from sometable where mycol = %s and othercol = " + value},  # Sensitive
        select_params=(someparam,),
    )
```

Compliant Solution

```python
cursor = connection.cursor(prepared=True)
sql_insert_query = """select col from sometable where mycol = %s and othercol = %s"""
select_tuple = (1, value)
cursor.execute(sql_insert_query, select_tuple)  # Compliant, the query is parameterized
connection.commit()
```

See
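The same parameterization principle can be illustrated with Python's built-in `sqlite3` driver; this self-contained sketch (table and values are illustrative, not part of the rule's Django examples) shows that a bound parameter neutralizes a classic injection payload:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

# Untrusted input is bound as a parameter, never concatenated into the SQL text.
value = "alice' OR '1'='1"
rows = conn.execute("SELECT role FROM users WHERE name = ?", (value,)).fetchall()
# The payload is compared as a plain string, so it matches no row.
```

Because the driver sends the query text and the values separately, the quoting characters in `value` never reach the SQL parser.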
|
||||||||||||
python:S6317 |
Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources. For such policies, it is easy to define very broad permissions (by using wildcard `"*"` characters, for example), so the granted permissions should be narrowed to what each identity actually needs. If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities. For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets.
If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

```python
from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["lambda:UpdateFunctionCode"],
            resources=["*"]  # Noncompliant
        )
    ]
)
```

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

```python
from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["lambda:UpdateFunctionCode"],
            resources=[
                "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
            ]
        )
    ]
)
```

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else. To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.

Resources

Documentation
Articles & blog posts
Standards |
||||||||||||
python:S6437 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The consequences vary greatly depending on the situation and the secret-exposed audience. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, each user of the application can call the third-party service without limit, for their own needs, including in ways that were not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.
For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

The following code example is noncompliant because it uses a hardcoded secret value.
Noncompliant code example

```python
from requests_oauthlib.oauth2_session import OAuth2Session

scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

token = oauth.fetch_token(
    'https://api.example.com/o/oauth2/token',
    client_secret='example_Password')  # Noncompliant
```

Compliant solution

```python
from os import environ
from requests_oauthlib.oauth2_session import OAuth2Session

scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

password = environ.get('OAUTH_SECRET')

token = oauth.fetch_token(
    'https://api.example.com/o/oauth2/token',
    client_secret=password)
```

How does this work?

While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This allows to have an environment-dependent secret value and avoids storing the password in the source code itself. Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change.

Resources

Documentation
Standards |
||||||||||||
python:S2755 |
This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.
How to fix it in Python Standard Library

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

```python
import xml.sax
from xml.sax.handler import feature_external_ges

parser = xml.sax.make_parser()
myHandler = MyHandler()
parser.setContentHandler(myHandler)
parser.setFeature(feature_external_ges, True)  # Noncompliant
parser.parse('xxe.xml')
```

Compliant solution

The SAX parser does not process general external entities by default since version 3.7.1.

```python
import xml.sax
from xml.sax.handler import feature_external_ges

parser = xml.sax.make_parser()
myHandler = MyHandler()
parser.setContentHandler(myHandler)
parser.setFeature(feature_external_ges, False)
parser.parse('xxe.xml')
```

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework. If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.

Resources

Standards |
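The default-safe behavior described above can be observed directly. In this self-contained sketch (document content is illustrative), a document referencing an external entity is parsed with `feature_external_ges` disabled; the parser skips the entity instead of fetching it, so the malicious reference expands to nothing:

```python
import io
import xml.sax
from xml.sax.handler import feature_external_ges

# A document that tries to pull in an external entity (classic XXE payload shape).
xxe_doc = """<?xml version="1.0"?>
<!DOCTYPE root [<!ENTITY ext SYSTEM "file:///etc/hostname">]>
<root>&ext;</root>"""

class TextCollector(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.text = ""

    def characters(self, content):
        self.text += content

parser = xml.sax.make_parser()
handler = TextCollector()
parser.setContentHandler(handler)
parser.setFeature(feature_external_ges, False)  # entities are skipped, not fetched
parser.parse(io.StringIO(xxe_doc))
# handler.text stays empty: the external entity reference was never resolved
```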
||||||||||||
python:S6319 |
Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. If the data is encrypted at rest, adversaries who gain physical access to the storage media cannot decrypt it.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For `aws_cdk.aws_sagemaker.CfnNotebookInstance`:

```python
from aws_cdk import (
    aws_sagemaker as sagemaker
)

class CfnSagemakerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sagemaker.CfnNotebookInstance(
            self, "Sensitive",
            instance_type="instanceType",
            role_arn="roleArn"
        )  # Sensitive, no KMS key is set by default; thus, encryption is disabled
```

Compliant Solution

For `aws_cdk.aws_sagemaker.CfnNotebookInstance`:

```python
from aws_cdk import (
    aws_sagemaker as sagemaker,
    aws_kms as kms
)

class CfnSagemakerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "Key")
        sagemaker.CfnNotebookInstance(
            self, "Compliant",
            instance_type="instanceType",
            role_arn="roleArn",
            kms_key_id=my_key.key_id
        )
```

See |
||||||||||||
python:S5439 |
This rule is deprecated; use S5247 instead.

Why is this an issue?

Template engines have an HTML autoescape mechanism that protects web applications against most common cross-site-scripting (XSS) vulnerabilities. By default, it automatically replaces HTML special characters in any template variables. This secure-by-design configuration should not be globally disabled. Escaping HTML from template variables prevents switching into any execution context, like `<script>`. A successful exploitation of a cross-site-scripting vulnerability allows an attacker to execute malicious JavaScript code in a user’s web browser. The most severe XSS attacks involve:
This rule supports the following libraries:

Noncompliant code example

```python
from jinja2 import Environment

env = Environment()  # Noncompliant; a new Jinja2 Environment has autoescape set to False
env = Environment(autoescape=False)  # Noncompliant
```

Compliant solution

```python
from jinja2 import Environment

env = Environment(autoescape=True)  # Compliant
```

Resources
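The practical difference between the two configurations can be shown in a few lines (the payload and template strings are illustrative, not part of the rule; Jinja2 must be installed):

```python
from jinja2 import Environment

payload = "<script>alert(1)</script>"
template = "{{ name }}"

unsafe_env = Environment()               # autoescape defaults to False
safe_env = Environment(autoescape=True)  # HTML special characters get replaced

raw = unsafe_env.from_string(template).render(name=payload)
escaped = safe_env.from_string(template).render(name=payload)
# raw keeps the <script> tag intact; escaped turns it into &lt;script&gt;...
```

With autoescaping enabled, the payload is rendered as inert text instead of executable markup, which is exactly the protection this rule asks not to disable globally.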
|
||||||||||||
python:S6779 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Flask secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to sign cookies and other sensitive data so that an attacker could potentially use it to perform malicious actions. For example, an attacker could use the secret key to create their own cookies that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality. In the worst-case scenario, an attacker could be able to execute arbitrary code on the application and take over its hosting server.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. In Flask, changing the secret value is sufficient to invalidate any data that it protected.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.
Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

```python
from flask import Flask

app = Flask(__name__)
app.config['SECRET_KEY'] = "secret"  # Noncompliant
```

Compliant solution

```python
from flask import Flask
import os

app = Flask(__name__)
app.config['SECRET_KEY'] = os.environ["SECRET_KEY"]
```

Resources

Standards
Documentation
|
||||||||||||
python:S1523 |
This rule is deprecated, and will eventually be removed.

Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use cases. However, most of the time their use is frowned upon because they also increase the risk of malicious code injection. Such attacks can either run on the server or in the client (for example, an XSS attack) and have a huge impact on an application’s security. This rule marks for review each occurrence of such dynamic code execution. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (for example, Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser). Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way. Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.

Sensitive Code Example

```python
import os

value = input()
command = 'os.system("%s")' % value

def evaluate(command, file, mode):
    eval(command)  # Sensitive.

eval(command)  # Sensitive. Dynamic code

def execute(code, file, mode):
    exec(code)  # Sensitive.
    exec(compile(code, file, mode))  # Sensitive.

exec(command)  # Sensitive.
```

See |
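When only literal data, not code, needs to be read from a string, the standard library's `ast.literal_eval` is a far safer alternative to `eval`; a minimal sketch (the sample strings are illustrative):

```python
import ast

# Parses Python literal structures only: numbers, strings, tuples, lists,
# dicts, sets, booleans, and None.
config = ast.literal_eval("{'retries': 3, 'verbose': True}")

# Anything that is not a plain literal, such as a function call, is rejected.
try:
    ast.literal_eval("__import__('os').system('id')")
    blocked = False
except ValueError:
    blocked = True
```

Because `literal_eval` never executes the input, an attacker-controlled string can at worst fail to parse; it cannot run arbitrary code the way `eval` would.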
||||||||||||
python:S2612 |
In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource. Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

For `os.umask`:

```python
os.umask(0)  # Sensitive
```

For `os.chmod`, `os.lchmod`, and `os.fchmod`:

```python
os.chmod("/tmp/fs", stat.S_IRWXO)   # Sensitive
os.lchmod("/tmp/fs", stat.S_IRWXO)  # Sensitive
os.fchmod(fd, stat.S_IRWXO)         # Sensitive
```

Compliant Solution

For `os.umask`:

```python
os.umask(0o777)
```

For `os.chmod`, `os.lchmod`, and `os.fchmod`:

```python
os.chmod("/tmp/fs", stat.S_IRWXU)
os.lchmod("/tmp/fs", stat.S_IRWXU)
os.fchmod(fd, stat.S_IRWXU)
```

See
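The effect of a restrictive umask can be checked with a short self-contained sketch (POSIX semantics assumed; the file name is illustrative). A umask of `0o077` removes all group and other bits from newly created files:

```python
import os
import stat
import tempfile

old_mask = os.umask(0o077)  # strip all group and other permissions from new files
try:
    directory = tempfile.mkdtemp()
    path = os.path.join(directory, "private.txt")
    with open(path, "w") as f:
        f.write("secret")
    # 0o666 default creation mode masked by 0o077 leaves 0o600 (owner read/write only)
    mode = stat.S_IMODE(os.stat(path).st_mode)
finally:
    os.umask(old_mask)  # restore the previous process-wide umask
```

Because the umask is process-wide state, restoring the previous value afterwards avoids surprising other parts of the program.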
|
||||||||||||
python:S5443 |
Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like `/tmp`.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like `/tmp`.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
Sensitive Code Example

```python
file = open("/tmp/temporary_file", "w+")  # Sensitive

tmp_dir = os.environ.get('TMPDIR')  # Sensitive
file = open(tmp_dir + "/temporary_file", "w+")
```

Compliant Solution

```python
import tempfile

file = tempfile.TemporaryFile(dir="/tmp/my_subdirectory", mode="w+")  # Compliant
```

See
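Note that the compliant example assumes `/tmp/my_subdirectory` already exists. An alternative, shown in this hedged sketch, is to let `tempfile.mkdtemp` create a fresh private subdirectory (owner-only permissions on POSIX) and place temporary files inside it:

```python
import os
import stat
import tempfile

# mkdtemp creates a directory readable, writable, and searchable only by its owner.
private_dir = tempfile.mkdtemp(prefix="myapp-")
dir_mode = stat.S_IMODE(os.stat(private_dir).st_mode)  # 0o700 on POSIX systems

with tempfile.NamedTemporaryFile(dir=private_dir, mode="w+", delete=False) as tmp:
    tmp.write("scratch data")
    path = tmp.name  # the file lives inside the private directory
```

Because no other user can enter the private directory, attackers cannot pre-create or tamper with files placed inside it, which sidesteps the shared-`/tmp` risks this rule describes.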
|
||||||||||||
python:S5445 |
Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it. In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it. Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive. For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.
Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise. For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

```python
import tempfile

filename = tempfile.mktemp()  # Noncompliant
tmp_file = open(filename, "w+")
```

Compliant solution

```python
import tempfile

tmp_file1 = tempfile.NamedTemporaryFile(delete=False)
tmp_file2 = tempfile.NamedTemporaryFile()
```

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Use a secure API function

Temporary files handling APIs generally provide secure functions to create temporary files. In most cases, they operate in an atomical way, creating and opening a file with a unique and unpredictable name in a single call. Those functions can often be used to replace less secure alternatives without requiring important development efforts.
Here, the example compliant code uses the more secure `tempfile.NamedTemporaryFile` function to handle the temporary file creation.

Strong security controls

Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose. In general, applications should ensure that attackers can not create a file before them. This turns into the following requirements when creating the files:
Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them. ResourcesDocumentation
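The atomic-creation guarantee described above can be observed directly: `tempfile.NamedTemporaryFile` creates and opens the file in a single call, with an unpredictable name and owner-only permissions (a quick POSIX-only check, not part of the rule's own examples):

```python
import os
import stat
import tempfile

# The file is created and opened atomically; no window exists between choosing
# the name and creating the file, so a third party cannot pre-create it.
with tempfile.NamedTemporaryFile() as tmp:
    file_mode = stat.S_IMODE(os.stat(tmp.name).st_mode)  # 0o600 on POSIX systems
```

Compare this with `mktemp`, which only returns a name: the subsequent `open` happens later and can race against an attacker creating that path first.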
Standards |
||||||||||||
python:S2053 |
This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords. However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need. Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster. If multiple users have the same password and the same salt, their password hashes would be identical.
This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once. A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before. With short salts, the probability of a collision between two users' password-and-salt couples might be low depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive, such as Argon2, bcrypt, scrypt, or PBKDF2.

When they are used for password storage, using a secure, random salt is required. However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.

How to fix it in Python Standard Library

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

```python
import hashlib

hash = hashlib.scrypt(password, salt=b"F3MdWpeHeeSjlUxvKBnzzA", n=2**17, r=8, p=1)  # Noncompliant
```

Compliant solution

```python
import hashlib
import secrets

salt = secrets.token_bytes(32)
hash = hashlib.scrypt(password, salt=salt, n=2**17, r=8, p=1)
```

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 32 bytes (256 bits), as recommended by industry standards. Here, the compliant code example ensures the salt is random and has a sufficient length by calling the `secrets.token_bytes` function with `32` as its argument.

Resources

Standards |
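The identical-hash risk described above can be demonstrated directly with `hashlib.scrypt` (the cost parameter is lowered to `n=2**14` here so the demo stays within scrypt's default memory limit; the fixed salt is the one from the noncompliant example):

```python
import hashlib
import secrets

password = b"correct horse battery staple"

# Re-using one fixed salt makes equal passwords produce equal hashes.
fixed_salt = b"F3MdWpeHeeSjlUxvKBnzzA"
h1 = hashlib.scrypt(password, salt=fixed_salt, n=2**14, r=8, p=1)
h2 = hashlib.scrypt(password, salt=fixed_salt, n=2**14, r=8, p=1)

# A fresh random salt per user yields distinct hashes for the same password.
h3 = hashlib.scrypt(password, salt=secrets.token_bytes(32), n=2**14, r=8, p=1)
h4 = hashlib.scrypt(password, salt=secrets.token_bytes(32), n=2**14, r=8, p=1)
```

With the fixed salt, `h1` and `h2` are byte-for-byte identical, so cracking one cracks both; with random salts, `h3` and `h4` differ even though the password is the same.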
||||||||||||
python:S3752 |
An HTTP method is safe when used to perform a read-only operation, such as retrieving information. In contrast, an unsafe HTTP method is used to change the state of an application, for instance to update a user’s profile on a web application. Common safe HTTP methods are GET, HEAD, or OPTIONS. Common unsafe HTTP methods are POST, PUT and DELETE. Allowing both safe and unsafe HTTP methods to perform a specific operation on a web application could impact its security; for example, CSRF protections usually only cover operations performed with unsafe HTTP methods. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesFor all the routes/controllers of an application, the authorized HTTP methods should be explicitly defined and safe HTTP methods should only be used to perform read-only operations. Sensitive Code ExampleFor Django: # No method restriction def view(request): # Sensitive return HttpResponse("...") @require_http_methods(["GET", "POST"]) # Sensitive def view(request): return HttpResponse("...") For Flask: @methods.route('/sensitive', methods=['GET', 'POST']) # Sensitive def view(): return Response("...", 200) Compliant SolutionFor Django: @require_http_methods(["POST"]) def view(request): return HttpResponse("...") @require_POST def view(request): return HttpResponse("...") @require_GET def view(request): return HttpResponse("...") @require_safe def view(request): return HttpResponse("...") For Flask: @methods.route('/compliant1') def view(): return Response("...", 200) @methods.route('/compliant2', methods=['GET']) def view(): return Response("...", 200) See
|
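The restriction mechanism behind decorators like Django's require_http_methods can be sketched framework-free; Request here is a hypothetical stand-in for a framework request object:

```python
def require_http_methods(allowed):
    # Minimal sketch of a method-restricting decorator, modeled on
    # Django's require_http_methods: reject any request whose method
    # is not explicitly allowed before the view runs.
    def decorator(view):
        def wrapped(request, *args, **kwargs):
            if request.method not in allowed:
                return ("405 Method Not Allowed", None)
            return view(request, *args, **kwargs)
        return wrapped
    return decorator

@require_http_methods(["POST"])
def update_profile(request):
    # State-changing operation: only reachable via the unsafe POST method.
    return ("200 OK", "profile updated")

class Request:
    def __init__(self, method):
        self.method = method

print(update_profile(Request("GET"))[0])   # 405 Method Not Allowed
print(update_profile(Request("POST"))[0])  # 200 OK
```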
||||||||||||
python:S4721 |
This rule is deprecated, and will eventually be removed. Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process: shell metacharacters can be used (for instance, when parameters are user-controlled) to inject OS commands. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesUse functions that don’t spawn a shell. Sensitive Code ExamplePython 3 subprocess.run(cmd, shell=True) # Sensitive subprocess.Popen(cmd, shell=True) # Sensitive subprocess.call(cmd, shell=True) # Sensitive subprocess.check_call(cmd, shell=True) # Sensitive subprocess.check_output(cmd, shell=True) # Sensitive os.system(cmd) # Sensitive: a shell is always spawned Python 2 cmd = "when a string is passed to these functions, a shell is spawned" (_, child_stdout) = os.popen2(cmd) # Sensitive (_, child_stdout, _) = os.popen3(cmd) # Sensitive (_, child_stdout) = os.popen4(cmd) # Sensitive (child_stdout, _) = popen2.popen2(cmd) # Sensitive (child_stdout, _, _) = popen2.popen3(cmd) # Sensitive (child_stdout, _) = popen2.popen4(cmd) # Sensitive Compliant SolutionPython 3 # by default shell=False, a shell is not spawned subprocess.run(cmd) # Compliant subprocess.Popen(cmd) # Compliant subprocess.call(cmd) # Compliant subprocess.check_call(cmd) # Compliant subprocess.check_output(cmd) # Compliant # always in a subprocess: os.spawnl(mode, path, *cmd) # Compliant os.spawnle(mode, path, *cmd, env) # Compliant os.spawnlp(mode, file, *cmd) # Compliant os.spawnlpe(mode, file, *cmd, env) # Compliant os.spawnv(mode, path, cmd) # Compliant os.spawnve(mode, path, cmd, env) # Compliant os.spawnvp(mode, file, cmd) # Compliant os.spawnvpe(mode, file, cmd, env) # Compliant (child_stdout) = os.popen(cmd, mode, 1) # Compliant (_, output) = subprocess.getstatusoutput(cmd) # Compliant out = subprocess.getoutput(cmd) # Compliant os.startfile(path) # Compliant os.execl(path, *cmd) # Compliant os.execle(path, *cmd, env) # Compliant os.execlp(file, *cmd) # Compliant os.execlpe(file, *cmd, env) # Compliant os.execv(path, cmd) # Compliant os.execve(path, cmd, env) # Compliant os.execvp(file, cmd) # Compliant os.execvpe(file, cmd, env) # Compliant Python 2 cmdsargs = ("use", "a", "sequence", "to", "directly", 
"start", "a", "subprocess") (_, child_stdout) = os.popen2(cmdsargs) # Compliant (_, child_stdout, _) = os.popen3(cmdsargs) # Compliant (_, child_stdout) = os.popen4(cmdsargs) # Compliant (child_stdout, _) = popen2.popen2(cmdsargs) # Compliant (child_stdout, _, _) = popen2.popen3(cmdsargs) # Compliant (child_stdout, _) = popen2.popen4(cmdsargs) # Compliant See |
||||||||||||
python:S6463 |
Allowing unrestricted outbound communications can lead to data leaks. A restrictive security group is an additional layer of protection that might prevent the abuse or exploitation of a resource. For example, it complicates the exfiltration of data in the case of a successfully exploited vulnerability. When deciding if outgoing connections should be limited, consider that limiting the connections results in additional administration and maintenance work. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to restrict outgoing connections to a set of trusted destinations. Sensitive Code ExampleFor aws_cdk.aws_ec2.SecurityGroup: from aws_cdk import ( aws_ec2 as ec2 ) ec2.SecurityGroup( # Sensitive; allow_all_outbound is enabled by default self, "example", vpc=vpc ) Compliant SolutionFor aws_cdk.aws_ec2.SecurityGroup: from aws_cdk import ( aws_ec2 as ec2 ) sg = ec2.SecurityGroup( self, "example", vpc=vpc, allow_all_outbound=False ) sg.add_egress_rule( peer=ec2.Peer.ipv4("203.0.113.127/32"), connection=ec2.Port.tcp(443) ) See
|
||||||||||||
python:S6327 |
Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they cannot access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary. Sensitive Code Examplefrom aws_cdk import ( aws_sns as sns ) class TopicStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) sns.Topic( # Sensitive, unencrypted by default self, "example" ) from aws_cdk import ( aws_sns as sns ) class TopicStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) sns.CfnTopic( # Sensitive, unencrypted by default self, "example" ) Compliant Solutionfrom aws_cdk import ( aws_sns as sns ) class TopicStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) my_key = kms.Key(self, "key") sns.Topic( self, "example", master_key=my_key ) from aws_cdk import ( aws_sns as sns ) class TopicStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) my_key = kms.Key(self, "key") sns.CfnTopic( self, "example", kms_master_key_id=my_key.key_id ) See |
||||||||||||
python:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, fixing the issue takes more time, which increases an attack’s impact. Ask Yourself WhetherThe disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesDon’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Exampleip = '192.168.12.42' sock = socket.socket() sock.bind((ip, 9090)) Compliant Solutionip = config.get(section, ipAddress) sock = socket.socket() sock.bind((ip, 9090)) ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See |
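A sketch of the environment-variable approach using only the standard library; the variable names are hypothetical, and the defaults keep the example runnable locally:

```python
import os
import socket

# Resolve the bind address from the environment with a localhost
# default, instead of hardcoding a routable IP in the source code.
ip = os.environ.get("SERVICE_BIND_ADDR", "127.0.0.1")
port = int(os.environ.get("SERVICE_BIND_PORT", "0"))  # 0 = ephemeral port

sock = socket.socket()
sock.bind((ip, port))
print(sock.getsockname()[0])  # 127.0.0.1 unless overridden
sock.close()
```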
||||||||||||
python:S4823 |
This rule is deprecated, and will eventually be removed. Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities: Command line arguments can be dangerous just like any other user input. They should never be used without first being validated and sanitized. Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus, passing sensitive information via command line arguments should be considered insecure. This rule raises an issue on every reference to Ask Yourself Whether
If you answered yes to any of these questions you are at risk. Recommended Secure Coding PracticesSanitize all command line arguments before using them. Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing them in the command line. It is common to write them on the process' standard input, or give the path to a file containing the information. See |
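One way to sanitize a command-line argument is to check it against an allow-list before use; this sketch (with hypothetical mode names) rejects anything unexpected:

```python
ALLOWED_MODES = {"start", "stop", "status"}

def parse_mode(argv):
    # Validate the argument against a fixed allow-list instead of
    # passing user-controlled input straight into the application.
    if len(argv) < 2 or argv[1] not in ALLOWED_MODES:
        raise ValueError("usage: app.py {start|stop|status}")
    return argv[1]

# In real use this would be parse_mode(sys.argv).
print(parse_mode(["app.py", "status"]))  # status
```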
||||||||||||
python:S4828 |
Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system. Accidentally setting an incorrect Also, the system treats the signal differently if the destination Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleimport os @app.route("/kill-pid/<pid>") def send_signal(pid): os.kill(pid, 9) # Sensitive @app.route("/kill-pgid/<pgid>") def send_signal(pgid): os.killpg(pgid, 9) # Sensitive Compliant Solutionimport os @app.route("/kill-pid/<pid>") def send_signal(pid): # Validate the untrusted PID, # With a pre-approved list or authorization checks if is_valid_pid(pid): os.kill(pid, 9) @app.route("/kill-pgid/<pgid>") def send_signal(pgid): # Validate the untrusted PGID, # With a pre-approved list or authorization checks if is_valid_pgid(pgid): os.kill(pgid, 9) See |
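The compliant example above relies on an is_valid_pid helper it does not define. A minimal sketch of such a pre-approved-list check could look like this (MANAGED_PIDS and its values are hypothetical):

```python
# Only processes that this application itself started and recorded
# may be signaled; everything else is rejected.
MANAGED_PIDS = {4242, 4243}

def is_valid_pid(pid):
    # Untrusted input may not even be numeric, so parse defensively.
    try:
        pid = int(pid)
    except (TypeError, ValueError):
        return False
    return pid in MANAGED_PIDS

print(is_valid_pid("4242"))  # True
print(is_valid_pid("1"))     # False: never signal init/system processes
```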
||||||||||||
python:S4829 |
This rule is deprecated, and will eventually be removed. Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities: It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated. This rule flags code that reads from the standard input. Ask Yourself Whether
You are at risk if you answered yes to this question. Recommended Secure Coding PracticesSanitize all data read from the standard input before using it. Sensitive Code ExamplePython 2 and Python 3 import sys from sys import stdin, __stdin__ # Any reference to sys.stdin or sys.__stdin__ without a method call is Sensitive sys.stdin # Sensitive for line in sys.stdin: # Sensitive print(line) it = iter(sys.stdin) # Sensitive line = next(it) # Calling the following methods on stdin or __stdin__ is sensitive sys.stdin.read() # Sensitive sys.stdin.readline() # Sensitive sys.stdin.readlines() # Sensitive # Calling other methods on stdin or __stdin__ does not require a review, thus it is not Sensitive sys.stdin.seekable() # Ok # ... Python 2 only raw_input('What is your password?') # Sensitive Python 3 only input('What is your password?') # Sensitive Function for line in fileinput.input(): # Sensitive print(line) for line in fileinput.FileInput(): # Sensitive print(line) for line in fileinput.input(['setup.py']): # Ok print(line) for line in fileinput.FileInput(['setup.py']): # Ok print(line) See |
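Validation of stdin data can be written against any text stream, which also makes it testable; this sketch (the port-number use case is hypothetical) accepts only a narrow, expected shape of input:

```python
import io

def read_port(stream):
    # Validate data read from standard input before use: accept only
    # an integer in the unprivileged port range, reject everything else.
    line = stream.readline().strip()
    if not line.isdigit() or not (1024 <= int(line) <= 65535):
        raise ValueError("expected a port number between 1024 and 65535")
    return int(line)

# In real use this would be read_port(sys.stdin).
print(read_port(io.StringIO("8080\n")))  # 8080
```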
||||||||||||
python:S6329 |
Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption. Depending on the component, inbound access from the Internet can be enabled via:
Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident. This decision increases the likelihood of attacks on the organization, such as:
Ask Yourself WhetherThis cloud resource:
There is a risk if you answered no to any of those questions. Recommended Secure Coding PracticesAvoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites. Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components. The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address. Sensitive Code ExampleFor aws_cdk.aws_ec2.Instance and similar constructs: from aws_cdk import aws_ec2 as ec2 ec2.Instance( self, "vpc_subnet_public", instance_type=nano_t2, machine_image=ec2.MachineImage.latest_amazon_linux(), vpc=vpc, vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC) # Sensitive ) For aws_cdk.aws_ec2.CfnInstance: from aws_cdk import aws_ec2 as ec2 ec2.CfnInstance( self, "cfn_public_exposed", instance_type="t2.micro", image_id="ami-0ea0f26a6d50850c5", network_interfaces=[ ec2.CfnInstance.NetworkInterfaceProperty( device_index="0", associate_public_ip_address=True, # Sensitive delete_on_termination=True, subnet_id=vpc.select_subnets(subnet_type=ec2.SubnetType.PUBLIC).subnet_ids[0] ) ] ) For aws_cdk.aws_dms.CfnReplicationInstance: from aws_cdk import aws_dms as dms rep_instance = dms.CfnReplicationInstance( self, "explicit_public", replication_instance_class="dms.t2.micro", allocated_storage=5, publicly_accessible=True, # Sensitive replication_subnet_group_identifier=subnet_group.replication_subnet_group_identifier, vpc_security_group_ids=[vpc.vpc_default_security_group] ) For aws_cdk.aws_rds.CfnDBInstance: from aws_cdk import aws_rds as rds from aws_cdk import aws_ec2 as ec2 rds_subnet_group_public = rds.CfnDBSubnetGroup( self, "public_subnet", 
db_subnet_group_description="Subnets", subnet_ids=vpc.select_subnets( subnet_type=ec2.SubnetType.PUBLIC ).subnet_ids ) rds.CfnDBInstance( self, "public-public-subnet", engine="postgres", master_username="foobar", master_user_password="12345678", db_instance_class="db.r5.large", allocated_storage="200", iops=1000, db_subnet_group_name=rds_subnet_group_public.ref, publicly_accessible=True, # Sensitive vpc_security_groups=[sg.security_group_id] ) Compliant Solutionfrom aws_cdk import aws_ec2 as ec2 ec2.Instance( self, "vpc_subnet_private", instance_type=nano_t2, machine_image=ec2.MachineImage.latest_amazon_linux(), vpc=vpc, vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT) ) For aws_cdk.aws_ec2.CfnInstance: from aws_cdk import aws_ec2 as ec2 ec2.CfnInstance( self, "cfn_private", instance_type="t2.micro", image_id="ami-0ea0f26a6d50850c5", network_interfaces=[ ec2.CfnInstance.NetworkInterfaceProperty( device_index="0", associate_public_ip_address=False, # Compliant delete_on_termination=True, subnet_id=vpc.select_subnets(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT).subnet_ids[0] ) ] ) For aws_cdk.aws_dms.CfnReplicationInstance: from aws_cdk import aws_dms as dms rep_instance = dms.CfnReplicationInstance( self, "explicit_private", replication_instance_class="dms.t2.micro", allocated_storage=5, publicly_accessible=False, replication_subnet_group_identifier=subnet_group.replication_subnet_group_identifier, vpc_security_group_ids=[vpc.vpc_default_security_group] ) For aws_cdk.aws_rds.CfnDBInstance: from aws_cdk import aws_rds as rds from aws_cdk import aws_ec2 as ec2 rds_subnet_group_private = rds.CfnDBSubnetGroup( self, "private_subnet", db_subnet_group_description="Subnets", subnet_ids=vpc.select_subnets( subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT ).subnet_ids ) rds.CfnDBInstance( self, "private-private-subnet", engine="postgres", master_username="foobar", master_user_password="12345678", db_instance_class="db.r5.large", 
allocated_storage="200", iops=1000, db_subnet_group_name=rds_subnet_group_private.ref, publicly_accessible=False, vpc_security_groups=[sg.security_group_id] ) See
|
||||||||||||
python:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrityWhen TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. 
How to fix it in Python Standard LibraryCode examplesThe following code contains examples of disabled certificate validation. Certificate validation is not enabled by default when Noncompliant code exampleimport ssl ctx1 = ssl._create_unverified_context() # Noncompliant ctx2 = ssl._create_stdlib_context() # Noncompliant ctx3 = ssl.create_default_context() ctx3.verify_mode = ssl.CERT_NONE # Noncompliant Compliant solutionimport ssl ctx = ssl.create_default_context() ctx.verify_mode = ssl.CERT_REQUIRED # By default, certificate validation is enabled ctx = ssl._create_default_https_context() How does this work?Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificatesIf possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAsIn some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. ResourcesStandards
|
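For the self-signed or internal-CA case described above, the trust store can be extended instead of disabling validation; "internal-ca.pem" is a hypothetical path, shown commented out so the sketch stays runnable:

```python
import ssl

# Keep the default secure context: certificate validation and hostname
# checking stay enabled.
ctx = ssl.create_default_context()

# To trust an internal CA, add its certificate to the trust store
# rather than setting verify_mode to CERT_NONE:
# ctx.load_verify_locations(cafile="internal-ca.pem")

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```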
||||||||||||
python:S6321 |
Why is this an issue?Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and
outbound traffic. What is the potential impact?Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system. Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system. How to fix itIt is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers. Code examplesNoncompliant code exampleFor aws_cdk.aws_ec2.Instance and other constructs that
support a from aws_cdk import aws_ec2 as ec2 instance = ec2.Instance( self, "my_instance", instance_type=nano_t2, machine_image=ec2.MachineImage.latest_amazon_linux(), vpc=vpc ) instance.connections.allow_from( ec2.Peer.any_ipv4(), # Noncompliant ec2.Port.tcp(22), description="Allows SSH from all IPv4" ) instance.connections.allow_from_any_ipv4( # Noncompliant ec2.Port.tcp(3389), description="Allows Terminal Server from all IPv4" ) For aws_cdk.aws_ec2.SecurityGroup from aws_cdk import aws_ec2 as ec2 security_group = ec2.SecurityGroup( self, "custom-security-group", vpc=vpc ) security_group.add_ingress_rule( ec2.Peer.any_ipv4(), # Noncompliant ec2.Port.tcp_range(1, 1024) ) For aws_cdk.aws_ec2.CfnSecurityGroup from aws_cdk import aws_ec2 as ec2 ec2.CfnSecurityGroup( self, "cfn-based-security-group", group_description="cfn based security group", group_name="cfn-based-security-group", vpc_id=vpc.vpc_id, security_group_ingress=[ ec2.CfnSecurityGroup.IngressProperty( # Noncompliant ip_protocol="6", cidr_ip="0.0.0.0/0", from_port=22, to_port=22 ), ec2.CfnSecurityGroup.IngressProperty( # Noncompliant ip_protocol="tcp", cidr_ip="0.0.0.0/0", from_port=3389, to_port=3389 ), { # Noncompliant "ipProtocol":"-1", "cidrIpv6":"::/0" } ] ) For aws_cdk.aws_ec2.CfnSecurityGroupIngress from aws_cdk import aws_ec2 as ec2 ec2.CfnSecurityGroupIngress( # Noncompliant self, "ingress-all-ip-tcp-ssh", ip_protocol="tcp", cidr_ip="0.0.0.0/0", from_port=22, to_port=22, group_id=security_group.attr_group_id ) ec2.CfnSecurityGroupIngress( # Noncompliant self, "ingress-all-ipv6-all-tcp", ip_protocol="-1", cidr_ipv6="::/0", group_id=security_group.attr_group_id ) Compliant solutionFor aws_cdk.aws_ec2.Instance and other constructs that
support a from aws_cdk import aws_ec2 as ec2 instance = ec2.Instance( self, "my_instance", instance_type=nano_t2, machine_image=ec2.MachineImage.latest_amazon_linux(), vpc=vpc ) instance.connections.allow_from_any_ipv4( ec2.Port.tcp(1234), description="Allows 1234 from all IPv4" ) instance.connections.allow_from( ec2.Peer.ipv4("192.0.2.0/24"), ec2.Port.tcp(22), description="Allows SSH from all IPv4" ) For aws_cdk.aws_ec2.SecurityGroup from aws_cdk import aws_ec2 as ec2 security_group = ec2.SecurityGroup( self, "custom-security-group", vpc=vpc ) security_group.add_ingress_rule( ec2.Peer.any_ipv4(), ec2.Port.tcp_range(1024, 1048) ) For aws_cdk.aws_ec2.CfnSecurityGroup from aws_cdk import aws_ec2 as ec2 ec2.CfnSecurityGroup( self, "cfn-based-security-group", group_description="cfn based security group", group_name="cfn-based-security-group", vpc_id=vpc.vpc_id, security_group_ingress=[ ec2.CfnSecurityGroup.IngressProperty( ip_protocol="tcp", cidr_ip="0.0.0.0/0", from_port=1024, to_port=1048 ), { "ipProtocol":"6", "cidrIp":"192.0.2.0/24", "fromPort":22, "toPort":22 } ] ) For aws_cdk.aws_ec2.CfnSecurityGroupIngress from aws_cdk import aws_ec2 as ec2 ec2.CfnSecurityGroupIngress( self, "ingress-all-ipv4-tcp-http", ip_protocol="6", cidr_ip="0.0.0.0/0", from_port=80, to_port=80, group_id=security_group.attr_group_id ) ec2.CfnSecurityGroupIngress( self, "ingress-range-tcp-rdp", ip_protocol="tcp", cidr_ip="192.0.2.0/24", from_port=3389, to_port=3389, group_id=security_group.attr_group_id ) ResourcesDocumentation
Standards |
||||||||||||
python:S6333 |
Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure. Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIn general, prefer limiting API access to a specific set of people or entities. AWS provides multiple methods to do so:
Sensitive Code ExampleFor aws_cdk.aws_apigateway.Resource: from aws_cdk import ( aws_apigateway as apigateway ) resource = api.root.add_resource("example") resource.add_method( "GET", authorization_type=apigateway.AuthorizationType.NONE # Sensitive ) For aws_cdk.aws_apigatewayv2.CfnRoute: from aws_cdk import ( aws_apigatewayv2 as apigateway ) apigateway.CfnRoute( self, "no-auth", api_id=api.ref, route_key="GET /test", authorization_type="NONE" # Sensitive ) Compliant SolutionFor aws_cdk.aws_apigateway.Resource: from aws_cdk import ( aws_apigateway as apigateway ) opts = apigateway.MethodOptions( authorization_type=apigateway.AuthorizationType.IAM ) resource = api.root.add_resource( "example", default_method_options=opts ) resource.add_method( "POST", authorization_type=apigateway.AuthorizationType.IAM ) resource.add_method( # authorization_type is inherited from the Resource's configured default_method_options "POST" ) For aws_cdk.aws_apigatewayv2.CfnRoute: from aws_cdk import ( aws_apigatewayv2 as apigateway ) apigateway.CfnRoute( self, "auth", api_id=api.ref, route_key="GET /test", authorization_type="AWS_IAM" ) See
|
||||||||||||
python:S2092 |
When a cookie is protected with the Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleFlask from flask import Response @app.route('/') def index(): response = Response() response.set_cookie('key', 'value') # Sensitive return response Compliant SolutionFlask from flask import Response @app.route('/') def index(): response = Response() response.set_cookie('key', 'value', secure=True) # Compliant return response See
|
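The same attributes can be set with only the standard library, which shows what the Flask secure=True flag emits on the wire:

```python
from http.cookies import SimpleCookie

# Mark the cookie Secure (and HttpOnly) so browsers only send it over
# HTTPS and keep it out of reach of client-side scripts.
cookie = SimpleCookie()
cookie["session"] = "value"
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

header = cookie.output()
print(header)  # a Set-Cookie header carrying the Secure and HttpOnly flags
```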
||||||||||||
python:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities: The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act as directives for the browser and relax the same-origin policy. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleDjango: CORS_ORIGIN_ALLOW_ALL = True # Sensitive Flask: from flask import Flask from flask_cors import CORS app = Flask(__name__) CORS(app, resources={r"/*": {"origins": "*", "send_wildcard": "True"}}) # Sensitive User-controlled origin: origin = request.headers['ORIGIN'] resp = Response() resp.headers['Access-Control-Allow-Origin'] = origin # Sensitive Compliant SolutionDjango: CORS_ORIGIN_ALLOW_ALL = False # Compliant Flask: from flask import Flask from flask_cors import CORS app = Flask(__name__) CORS(app, resources={r"/*": {"origins": ["https://trustedwebsite.com"]}}) # Compliant User-controlled origin validated with an allow-list: origin = request.headers['ORIGIN'] resp = Response() if origin in TRUSTED_ORIGINS: resp.headers['Access-Control-Allow-Origin'] = origin See
|
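The allow-list pattern from the compliant example can be isolated into a small testable helper; the origins below are hypothetical:

```python
TRUSTED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_allow_origin(origin):
    # Reflect the Origin request header only when it is on the fixed
    # allow-list; otherwise emit no Access-Control-Allow-Origin at all.
    return origin if origin in TRUSTED_ORIGINS else None

print(cors_allow_origin("https://app.example.com"))  # reflected back
print(cors_allow_origin("https://evil.example"))     # None
```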
||||||||||||
python:S5247 |
To reduce the risk of cross-site scripting attacks, templating systems, such as Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "html auto-escaping" strategy (which only transforms html characters into html entities) will not be relevant when variables are used in a html attribute because ' <a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie) <a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack) Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one. Sensitive Code Examplefrom jinja2 import Environment env = Environment() # Sensitive: New Jinja2 Environment has autoescape set to false env = Environment(autoescape=False) # Sensitive: Compliant Solutionfrom jinja2 import Environment env = Environment(autoescape=True) # Compliant See
|
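The href example above can be reproduced with the standard library's html.escape, which illustrates why HTML-entity escaping alone does not neutralize this payload:

```python
import html

# html.escape only rewrites &, <, >, " and '. A javascript: URL
# contains none of those characters, so an "html auto-escaping"
# strategy leaves this attribute-context payload intact.
my_link = "javascript:alert(document.cookie)"
print(html.escape(my_link))  # unchanged: still a working javascript: URL
```

Context-aware escaping (or validating URLs against an allow-list of schemes) is needed in addition to HTML-entity escaping for attribute values.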
||||||||||||
python:S6330 |
Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. If adversaries gain physical access to the storage medium, or otherwise leak a message from the file system (for example through a vulnerability in the service), they cannot access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary. Sensitive Code Examplefrom aws_cdk import ( aws_sqs as sqs ) class QueueStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) sqs.Queue( # Sensitive, unencrypted by default self, "example" ) from aws_cdk import ( aws_sqs as sqs ) class CfnQueueStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) sqs.CfnQueue( # Sensitive, unencrypted by default self, "example" ) Compliant Solutionfrom aws_cdk import ( aws_sqs as sqs ) class QueueStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) sqs.Queue( self, "example", encryption=sqs.QueueEncryption.KMS_MANAGED ) from aws_cdk import ( aws_sqs as sqs ) class CfnQueueStack(Stack): def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: super().__init__(scope, construct_id, **kwargs) my_key = kms.Key(self, "key") sqs.CfnQueue( self, "example", kms_master_key_id=my_key.key_id ) See
|
||||||||||||
python:S6332 |
Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium or otherwise leak stored data, they cannot read it. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_efs.FileSystem and aws_cdk.aws_efs.CfnFileSystem:

```python
from aws_cdk import (
    aws_efs as efs
)

efs.FileSystem(
    self,
    "example",
    encrypted=False  # Sensitive
)
```

Compliant Solution

For aws_cdk.aws_efs.FileSystem and aws_cdk.aws_efs.CfnFileSystem:

```python
from aws_cdk import (
    aws_efs as efs
)

efs.FileSystem(
    self,
    "example",
    encrypted=True
)
```

See
|
||||||||||||
cloudformation:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in API Gateway

Code examples

These code samples illustrate how to fix this issue in both APIGateway and ApiGatewayV2.

Noncompliant code example

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGateway::DomainName
    Properties:
      SecurityPolicy: "TLS_1_0" # Noncompliant
```

The ApiGatewayV2 uses a weak TLS version by default:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi: # Noncompliant
    Type: AWS::ApiGatewayV2::DomainName
```

Compliant solution

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGateway::DomainName
    Properties:
      SecurityPolicy: "TLS_1_2"
```

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGatewayV2::DomainName
    Properties:
      DomainNameConfigurations:
        - SecurityPolicy: "TLS_1_2"
```

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following.
Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts
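The same TLS floor can also be enforced at the application level. As an illustrative sketch (using Python's standard `ssl` module, unrelated to any AWS API; the helper name is ours), a client context can be configured to refuse handshakes below TLS v1.2:

```python
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that rejects TLS 1.1 and below."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Refuse SSLv3, TLS 1.0 and TLS 1.1 handshakes outright.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context
```

Recent Python versions already default to TLS v1.2 as the minimum for `create_default_context`, but setting it explicitly documents the intent and guards against older runtimes.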
Standards |
||||||||||||
cloudformation:S6304 |
A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information may occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., by only granting access to necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of data they store or process. This makes managing secure access control less error-prone.

Sensitive Code Example

Update permission is granted for all policies using the wildcard (*) in the `Resource` attribute:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "iam:CreatePolicyVersion"
            Resource:
              - "*" # Sensitive
      Roles:
        - !Ref MyRole
```

Compliant Solution

Restrict update permission to the appropriate subset of policies:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "iam:CreatePolicyVersion"
            Resource:
              - !Sub "arn:aws:iam::${AWS::AccountId}:policy/team1/*"
      Roles:
        - !Ref MyRole
```

Exceptions
See
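The sensitive pattern from this rule can also be spotted programmatically. Below is a minimal, illustrative sketch (a hypothetical helper, not part of any AWS SDK) that scans an IAM policy document, represented as a Python dict, for `Allow` statements whose `Resource` is the wildcard:

```python
def statements_with_wildcard_resource(policy_document: dict) -> list[dict]:
    """Return Allow statements whose Resource contains the '*' wildcard."""
    flagged = []
    for statement in policy_document.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        resources = statement.get("Resource", [])
        if isinstance(resources, str):  # Resource may be a single string
            resources = [resources]
        if "*" in resources:
            flagged.append(statement)
    return flagged
```

Against the sensitive example above, the single `iam:CreatePolicyVersion` statement would be flagged, while the compliant `!Sub`-scoped ARN would not.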
|
||||||||||||
cloudformation:S6327 |
Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SNS::Topic:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Topic: # Sensitive, encryption disabled by default
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: "unencrypted_topic"
```

Compliant Solution

For AWS::SNS::Topic:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Topic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: "encrypted_topic"
      KmsMasterKeyId:
        Fn::GetAtt:
          - TestKey
          - KeyId
```

See |
||||||||||||
cloudformation:S5332 |
Clear-text protocols such as `ftp`, `telnet`, or `http` lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the `http` protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities:

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For AWS Kinesis Data Streams, server-side encryption is disabled by default:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  KinesisStream: # Sensitive
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      # No StreamEncryption
```

For Amazon ElastiCache:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupId: "example"
      TransitEncryptionEnabled: false # Sensitive
```

For Amazon ECS:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  EcsTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: "service"
      Volumes:
        - Name: "storage"
          EFSVolumeConfiguration:
            FilesystemId: !Ref FS
            TransitEncryption: "DISABLED" # Sensitive
```

For AWS Load Balancer Listeners:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  HTTPlistener:
    Type: "AWS::ElasticLoadBalancingV2::Listener"
    Properties:
      DefaultActions:
        - Type: "redirect"
          RedirectConfig:
            Protocol: "HTTP"
      Protocol: "HTTP" # Sensitive
```

For Amazon OpenSearch domains:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: example
      DomainEndpointOptions:
        EnforceHTTPS: false # Sensitive
      NodeToNodeEncryptionOptions:
        Enabled: false # Sensitive
```

For Amazon MSK communications between clients and brokers:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  MSKCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: MSKCluster
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS_PLAINTEXT # Sensitive
          InCluster: false # Sensitive
```

Compliant Solution

For AWS Kinesis Data Streams server-side encryption:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  KinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      StreamEncryption:
        EncryptionType: KMS
```

For Amazon ElastiCache:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupId: "example"
      TransitEncryptionEnabled: true
```

For Amazon ECS:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  EcsTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: "service"
      Volumes:
        - Name: "storage"
          EFSVolumeConfiguration:
            FilesystemId: !Ref FS
            TransitEncryption: "ENABLED"
```

For AWS Load Balancer Listeners:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  HTTPlistener:
    Type: "AWS::ElasticLoadBalancingV2::Listener"
    Properties:
      DefaultActions:
        - Type: "redirect"
          RedirectConfig:
            Protocol: "HTTPS"
      Protocol: "HTTP"
```

For Amazon OpenSearch domains:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: example
      DomainEndpointOptions:
        EnforceHTTPS: true
      NodeToNodeEncryptionOptions:
        Enabled: true
```

For Amazon MSK communications between clients and brokers, data in transit is encrypted by default, allowing you to omit writing the `EncryptionInTransit` property:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  MSKCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: MSKCluster
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS
          InCluster: true
```

See
|
||||||||||||
cloudformation:S6245 |
This rule is deprecated, and will eventually be removed. Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks, and other attacks on the AWS infrastructure itself. There are three SSE options:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
```

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
```

See
|
||||||||||||
cloudformation:S6249 |
By default, S3 buckets can be accessed through the HTTP and HTTPS protocols. As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to deny all HTTP requests:
Sensitive Code Example

No secure policy is attached to this S3 bucket:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
```

A policy is defined but forces HTTPS communication for only some users:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Principal:
              AWS: # Sensitive: only one principal is forced to use https
                - 'arn:aws:iam::123456789123:root'
            Action: "*"
            Resource: arn:aws:s3:::mynoncompliantbuckets6249/*
            Condition:
              Bool:
                "aws:SecureTransport": false
```

Compliant Solution

A secure policy that denies the use of all HTTP requests:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "mycompliantbucket"
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: "mycompliantbucket"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Principal:
              AWS: "*" # all principals should use https
            Action: "*" # for any actions
            Resource: arn:aws:s3:::mycompliantbucket/* # for any resources
            Condition:
              Bool:
                "aws:SecureTransport": false
```

See
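Whether a bucket policy of this shape actually denies insecure transport for everyone can be checked mechanically. The following is an illustrative sketch (a hypothetical helper, not an AWS API) operating on the policy document as a Python dict:

```python
def denies_insecure_transport(policy_document: dict) -> bool:
    """True if some statement denies every principal ('*') when aws:SecureTransport is false."""
    for statement in policy_document.get("Statement", []):
        if statement.get("Effect") != "Deny":
            continue
        principal = statement.get("Principal")
        if isinstance(principal, dict):
            principal = principal.get("AWS")
        if principal != "*":  # a list of specific principals does not cover everyone
            continue
        bool_condition = statement.get("Condition", {}).get("Bool", {})
        # YAML may yield the boolean False or the string "false" here
        if str(bool_condition.get("aws:SecureTransport", "")).lower() == "false":
            return True
    return False
```

The compliant policy above (principal `"*"`) would pass this check; the sensitive one, which pins the deny to a single account ARN, would not.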
|
||||||||||||
cloudformation:S6329 |
Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption. Depending on the component, inbound access from the Internet can be enabled via:
Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident. This decision increases the likelihood of attacks on the organization, such as:
Ask Yourself Whether

This cloud resource:
There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites. Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components. The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

DMS and EC2 instances have a public IP address assigned to them:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  DMSInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      PubliclyAccessible: true # sensitive, by default it's also set to true
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      NetworkInterfaces:
        - AssociatePublicIpAddress: true # sensitive, by default it's also set to true
          DeviceIndex: "0"
```

Compliant Solution

DMS and EC2 instances do not have a public IP address:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  DMSInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      PubliclyAccessible: false
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      NetworkInterfaces:
        - AssociatePublicIpAddress: false
          DeviceIndex: "0"
```

See
|
||||||||||||
cloudformation:S6265 |
Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users. The following canned ACLs are security-sensitive:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant necessary permissions only to users for their required tasks. In the context of canned ACLs, set it to `private` (the default one).

Sensitive Code Example

All users (i.e., anyone in the world, authenticated or not) have read and write permissions with the `PublicReadWrite` canned ACL:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"
      AccessControl: "PublicReadWrite"
```

Compliant Solution

With the `Private` canned ACL (the default), only the bucket owner is granted access:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "mycompliantbucket"
      AccessControl: "Private"
```

See
|
||||||||||||
cloudformation:S6281 |
By default, S3 buckets are private: only the bucket owner can access them. This access control can be relaxed with ACLs or policies. To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to configure:
Sensitive Code Example

By default, when not set, the `PublicAccessBlockConfiguration` is fully deactivated (nothing is blocked):

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucketdefault:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "example"
```

This `PublicAccessBlockConfiguration` still allows public ACLs:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "example"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false # should be true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```

Compliant Solution

This `PublicAccessBlockConfiguration` blocks all public access:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "example"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```

See
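The four flags must all be true for the bucket to be fully locked down, and omitting the block entirely is equivalent to setting them all to false. A minimal, illustrative check over a bucket's `Properties` as a Python dict (the helper is hypothetical, not an AWS API):

```python
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "BlockPublicPolicy",
    "IgnorePublicAcls",
    "RestrictPublicBuckets",
)

def blocks_all_public_access(bucket_properties: dict) -> bool:
    """True only when PublicAccessBlockConfiguration is present with all four flags true."""
    config = bucket_properties.get("PublicAccessBlockConfiguration")
    if config is None:  # omitted entirely: nothing is blocked by default
        return False
    return all(config.get(flag) is True for flag in REQUIRED_FLAGS)
```

A single flag left at false (as in the sensitive example, where `BlockPublicAcls` is false) makes the whole configuration fail the check.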
|
||||||||||||
cloudformation:S6302 |
A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the `Action` attribute:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "*" # Sensitive
            Resource:
              - !Ref MyResource
      Roles:
        - !Ref MyRole
```

Compliant Solution

A customer-managed policy that grants only the required permissions:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "s3:GetObject"
            Resource:
              - !Ref MyResource
      Roles:
        - !Ref MyRole
```

See
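Complementing the resource-wildcard case, the action-wildcard pattern of this rule can be detected the same way. An illustrative sketch (hypothetical helper, not an AWS API) over a policy document as a Python dict:

```python
def grants_all_actions(policy_document: dict) -> bool:
    """True if an Allow statement uses the bare '*' wildcard as its Action."""
    for statement in policy_document.get("Statement", []):
        if statement.get("Effect") != "Allow":
            continue
        actions = statement.get("Action", [])
        if isinstance(actions, str):  # Action may be a single string
            actions = [actions]
        if "*" in actions:
            return True
    return False
```

The sensitive example above (`Action: ["*"]`) trips this check; the compliant one (`s3:GetObject`) does not.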
|
||||||||||||
cloudformation:S6303 |
Using unencrypted RDS DB resources exposes data to unauthorized access. This situation can occur in a variety of scenarios, such as:
After a successful intrusion, the underlying applications are exposed to:
AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.

Sensitive Code Example

For AWS::RDS::DBInstance and AWS::RDS::DBCluster:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: false # Sensitive, disabled by default
  DatabaseCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      StorageEncrypted: false # Sensitive, disabled by default
```

Compliant Solution

For AWS::RDS::DBInstance and AWS::RDS::DBCluster:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: true
  DatabaseCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      StorageEncrypted: true
```

See
|
||||||||||||
cloudformation:S6308 |
Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances. To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:
Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt Elasticsearch domains that contain sensitive information. Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::Elasticsearch::Domain:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Elasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      EncryptionAtRestOptions:
        Enabled: false # Sensitive, disabled by default
```

Compliant Solution

For AWS::Elasticsearch::Domain:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Elasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      EncryptionAtRestOptions:
        Enabled: true
```

See
|
||||||||||||
cloudformation:S6317 |
Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They give an identity the ability to perform a predefined set of actions on a list of resources. For such policies, it is easy to define very broad permissions (by using the wildcard `*`, for example). If overly broad permissions are granted, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities. For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets.
If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in Identity and Access Management

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  # Update Lambda code
  lambdaUpdatePolicy: # Noncompliant
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: lambdaUpdatePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - lambda:UpdateFunctionCode
            Resource: "*"
```

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  # Update Lambda code
  lambdaUpdatePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: lambdaUpdatePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - lambda:UpdateFunctionCode
            Resource: "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
```

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else. To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions.
When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.

Resources

Documentation
Articles & blog posts
Standards |
||||||||||||
cloudformation:S6321 |
Why is this an issue?

Cloud platforms such as AWS support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges, and thus a vulnerability could have a high impact on the system. Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref ExampleVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22 # SSH traffic
          CidrIp: "0.0.0.0/0" # from all IP addresses is authorized
```

Compliant solution

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref ExampleVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: "1.2.3.0/24"
```

Resources

Documentation
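Detecting this misconfiguration in an ingress rule amounts to checking two things: the CIDR covers the whole address space and the port range includes an administration port. A minimal sketch using Python's standard `ipaddress` module (the helper and the port list are illustrative choices, not an AWS API):

```python
import ipaddress

ADMIN_PORTS = {22, 3389}  # SSH and RDP; extend as needed for your environment

def exposes_admin_port_to_world(ingress_rule: dict) -> bool:
    """True if the rule opens an administration port to the whole internet."""
    network = ipaddress.ip_network(ingress_rule["CidrIp"])
    open_to_world = network.prefixlen == 0  # 0.0.0.0/0 or ::/0
    ports = range(ingress_rule["FromPort"], ingress_rule["ToPort"] + 1)
    return open_to_world and any(port in ADMIN_PORTS for port in ports)
```

The noncompliant rule above (`22/tcp` from `0.0.0.0/0`) would be flagged; the compliant one, restricted to `1.2.3.0/24`, would not.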
Standards |
||||||||||||
cloudformation:S6333 |
Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure. Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.

Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesIn general, prefer limiting API access to a specific set of people or entities. AWS provides multiple methods to do so:
Sensitive Code Example

A public API that doesn’t have access control implemented:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE # Sensitive
      HttpMethod: GET
```

A Serverless Application Model (SAM) API resource that is public by default:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi: # Sensitive
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
```

Compliant Solution

An API that implements AWS IAM permissions:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: AWS_IAM
      HttpMethod: GET
```

A Serverless Application Model (SAM) API resource that has to be requested using a key:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        ApiKeyRequired: true
```

See
|
||||||||||||
cloudformation:S6364 |
Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident. Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident. Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to be able to restore service in case of an incident.

Sensitive Code Example

For Amazon Relational Database Service clusters and instances:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabase:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: NonCompliantDatabase
      BackupRetentionPeriod: 2 # Sensitive
```

Compliant Solution

For Amazon Relational Database Service clusters and instances:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabase:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: CompliantDatabase
      BackupRetentionPeriod: 5
```
|
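A retention check reduces to comparing `BackupRetentionPeriod` against a threshold chosen to match the organization's recovery objectives. An illustrative sketch over a DB resource's `Properties` as a Python dict (the helper and the 7-day threshold are assumptions, not AWS-defined values):

```python
MIN_RETENTION_DAYS = 7  # illustrative threshold; pick one matching your recovery objectives

def retention_too_short(db_properties: dict) -> bool:
    """True when the configured backup retention falls below the chosen threshold."""
    # CloudFormation defaults BackupRetentionPeriod to 1 day when the property is omitted
    return db_properties.get("BackupRetentionPeriod", 1) < MIN_RETENTION_DAYS
```

Note that omitting the property entirely is treated the same as a short explicit value, since the default of 1 day is rarely enough to recover from an incident discovered late.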
||||||||||||
cloudformation:S6252 |
S3 buckets can be in three states related to versioning:
When the S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the S3 bucket. This can lead to unintentional or intentional information loss.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

Versioning is disabled by default:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "Example"
```

Compliant Solution

Versioning is enabled:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "Example"
      VersioningConfiguration:
        Status: Enabled
```

See
|
||||||||||||
cloudformation:S6258 |
Disabling logging of this component can lead to missing traceability in case of a security incident. Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions. Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable the logging capabilities of this component. Depending on the component, new permissions might be required by the logging storage components.

Sensitive Code Example

For Amazon S3 access requests:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"
```

For Amazon API Gateway stages:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Prod: # Sensitive
    Type: AWS::ApiGateway::Stage
    Properties:
      StageName: Prod
      Description: Prod Stage
      TracingEnabled: false # Sensitive
```

For Amazon Neptune clusters:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports: [] # Sensitive
```

For Amazon MSK broker logs:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  SensitiveCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: Sensitive Cluster
      LoggingInfo:
        BrokerLogs: # Sensitive
          CloudWatchLogs:
            Enabled: false
            LogGroup: CWLG
          Firehose:
            DeliveryStream: DS
            Enabled: false
```

For Amazon DocDB:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DocDBOmittingLogs: # Sensitive
    Type: "AWS::DocDB::DBCluster"
    Properties:
      DBClusterIdentifier: "DB Without Logs"
```

For Amazon MQ:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Broker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      Logs: # Sensitive
        Audit: false
        General: false
```

For Amazon Redshift:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ClusterOmittingLogging: # Sensitive
    Type: "AWS::Redshift::Cluster"
    Properties:
      DBName: "Redshift Warehouse Cluster"
```

For Amazon OpenSearch Service or Amazon Elasticsearch Service (audit logs are missing):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OpenSearchServiceDomain:
    Type: 'AWS::OpenSearchService::Domain'
    Properties:
      LogPublishingOptions: # Sensitive
        ES_APPLICATION_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-application-logs'
          Enabled: true
        INDEX_SLOW_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-index-slow-logs'
          Enabled: true
```

For Amazon CloudFront distributions:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudFrontDistribution: # Sensitive
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultRootObject: "index.html"
```

For Amazon Elastic Load Balancing:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AccessLoggingPolicy:
        Enabled: false # Sensitive
```

For Amazon Elastic Load Balancing (v2):

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ApplicationLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: NonCompliantLoadBalancer
      LoadBalancerAttributes:
        - Key: "access_logs.s3.enabled"
          Value: false # Sensitive
```

Compliant Solution

For Amazon S3 access requests:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketLogs:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "mycompliantloggingbucket"
      AccessControl: LogDeliveryWrite

  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "mycompliantbucket"
      LoggingConfiguration:
        DestinationBucketName: !Ref S3BucketLogs
        LogFilePrefix: testing-logs
```

For Amazon API Gateway stages:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Prod:
    Type: AWS::ApiGateway::Stage
    Properties:
      StageName: Prod
      Description: Prod Stage
      TracingEnabled: true
      AccessLogSetting:
        DestinationArn: "arn:aws:logs:eu-west-1:123456789:test"
        Format: "..."
```

For Amazon Neptune clusters:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports: ["audit"]
```

For Amazon MSK broker logs:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  SensitiveCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: Sensitive Cluster
      LoggingInfo:
        BrokerLogs:
          Firehose:
            DeliveryStream: DS
            Enabled: true
          S3:
            Bucket: Broker Logs
            Enabled: true
            Prefix: "logs/msk-brokers-"
```

For Amazon DocDB:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DocDBWithLogs:
    Type: "AWS::DocDB::DBCluster"
    Properties:
      DBClusterIdentifier: "DB With Logs"
      EnableCloudwatchLogsExports:
        - audit
```

For Amazon MQ:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Broker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      Logs:
        Audit: true
        General: true
```

For Amazon Redshift:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  CompliantCluster:
    Type: "AWS::Redshift::Cluster"
    Properties:
      DBName: "Redshift Warehouse Cluster"
      LoggingProperties:
        BucketName: "Infra Logs"
        S3KeyPrefix: "log/redshift-"
```

For Amazon OpenSearch Service or Amazon Elasticsearch Service:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OpenSearchServiceDomain:
    Type: 'AWS::OpenSearchService::Domain'
    Properties:
      LogPublishingOptions:
        AUDIT_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-audit-logs'
          Enabled: true
```

For Amazon CloudFront distributions:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudFrontDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultRootObject: "index.html"
        Logging:
          Bucket: "mycompliantbucket"
          Prefix: "log/cloudfront-"
```

For Amazon Elastic Load Balancing:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AccessLoggingPolicy:
        Enabled: true
        S3BucketName: mycompliantbucket
        S3BucketPrefix: "log/loadbalancer-"
```

For Amazon Elastic Load Balancing (v2):

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  ApplicationLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: CompliantLoadBalancer
      LoadBalancerAttributes:
        - Key: "access_logs.s3.enabled"
          Value: true
        - Key: "access_logs.s3.bucket"
          Value: "mycompliantbucket"
        - Key: "access_logs.s3.prefix"
          Value: "log/elbv2-"
```

See
|
||||||||||||
cloudformation:S6270 |
Resource-based policies granting access to all users can lead to information leakage. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. to grant users only the permissions required for their tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy' # Sensitive
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: "*" # all principals / anonymous access
            Action: "s3:PutObject" # can put object
            Resource: arn:aws:s3:::mybucket/*
```

Compliant Solution

This policy allows only the authorized users:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy' # Compliant
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:root' # only this principal
            Action: "s3:PutObject" # can put object
            Resource: arn:aws:s3:::mybucket/*
```

See
|
||||||||||||
cloudformation:S6275 |
Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In case adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot is also encrypted by default. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For AWS::EC2::Volume:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      Encrypted: false # Sensitive
```

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume # Sensitive as encryption is disabled by default
```

Compliant Solution

For AWS::EC2::Volume:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      Encrypted: true
```

See
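Beyond simply setting `Encrypted: true` (which uses the account's default EBS key), a customer-managed KMS key can be referenced. A minimal sketch, assuming a KMS key resource named `Ec2VolumeKey` is declared elsewhere in the template; the availability zone and size values are illustrative:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      AvailabilityZone: eu-west-1a
      Size: 100
      Encrypted: true
      # Customer-managed KMS key (Ec2VolumeKey is assumed to exist in this template)
      KmsKeyId: !GetAtt Ec2VolumeKey.Arn
```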
||||||||||||
cloudformation:S6319 |
Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. If adversaries gain physical access to the storage media, they are not able to read encrypted data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SageMaker::NotebookInstance:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Notebook: # Sensitive, encryption disabled by default
    Type: AWS::SageMaker::NotebookInstance
```

Compliant Solution

For AWS::SageMaker::NotebookInstance:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Notebook:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      KmsKeyId:
        Fn::GetAtt:
          - SomeKey
          - KeyId
```

See
||||||||||||
cloudformation:S6330 |
Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SQS::Queue:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue: # Sensitive, encryption disabled by default
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "unencrypted_queue"
```

Compliant Solution

For AWS::SQS::Queue:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "encrypted_queue"
      KmsMasterKeyId:
        Fn::GetAtt:
          - TestKey
          - KeyId
```

See
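As an alternative to a customer-managed KMS key, SQS also supports SQS-managed server-side encryption (SSE-SQS), which requires no key management. A minimal sketch using the `SqsManagedSseEnabled` property (queue name is illustrative):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "encrypted_queue"
      # SQS-managed server-side encryption; no KMS key to provision or rotate
      SqsManagedSseEnabled: true
```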
|
||||||||||||
cloudformation:S6332 |
Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium or otherwise leak stored file contents, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::EFS::FileSystem:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs: # Sensitive, encryption disabled by default
    Type: AWS::EFS::FileSystem
```

Compliant Solution

For AWS::EFS::FileSystem:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true
```

See
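When `Encrypted: true` is set without a key, EFS uses the service's default AWS-managed key. A customer-managed KMS key can be supplied instead via `KmsKeyId`. A minimal sketch, assuming a KMS key resource named `FsKey` is declared elsewhere in the template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true
      # Customer-managed KMS key (FsKey is assumed to exist in this template)
      KmsKeyId: !GetAtt FsKey.Arn
```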
|
||||||||||||
vbnet:S3329 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In the Cipher Block Chaining (CBC) mode, each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV). If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input will always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, a company, its employees, users, and partners could be seriously affected. The impact is twofold: data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

```vbnet
Imports System.IO
Imports System.Security.Cryptography

Public Sub Encrypt(key As Byte(), dataToEncrypt As Byte(), target As MemoryStream)
    Dim aes = New AesCryptoServiceProvider()
    Dim iv = New Byte() {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
    Dim encryptor = aes.CreateEncryptor(key, iv) ' Noncompliant

    Dim cryptoStream = New CryptoStream(target, encryptor, CryptoStreamMode.Write)
    Dim swEncrypt = New StreamWriter(cryptoStream)
    swEncrypt.Write(dataToEncrypt)
End Sub
```

Compliant solution

In this example, the code implicitly uses a number generator that is considered strong, thanks to the random IV (`aes.IV`) generated by the AES implementation:

```vbnet
Imports System.IO
Imports System.Security.Cryptography

Public Sub Encrypt(key As Byte(), dataToEncrypt As Byte(), target As MemoryStream)
    Dim aes = New AesCryptoServiceProvider()
    Dim encryptor = aes.CreateEncryptor(key, aes.IV)

    Dim cryptoStream = New CryptoStream(target, encryptor, CryptoStreamMode.Write)
    Dim swEncrypt = New StreamWriter(cryptoStream)
    swEncrypt.Write(dataToEncrypt)
End Sub
```

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext. In the previous non-compliant example, the problem is not that the IV is hard-coded as such, but that the same IV is reused across encryption operations.

Resources

Standards
|
||||||||||||
vbnet:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers. The .NET Core framework offers multiple features which help during debugging. Use `If env.IsDevelopment() Then` to ensure they are only enabled in development environments.

Sensitive Code Example

This rule raises issues when the following .NET Core methods are called:

```vbnet
Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting

Namespace MyMvcApp
    Public Class Startup
        Public Sub Configure(ByVal app As IApplicationBuilder, ByVal env As IHostingEnvironment)
            ' Those calls are Sensitive because it seems that they will run in production
            app.UseDeveloperExceptionPage() ' Sensitive
            app.UseDatabaseErrorPage() ' Sensitive
        End Sub
    End Class
End Namespace
```

Compliant Solution

```vbnet
Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting

Namespace MyMvcApp
    Public Class Startup
        Public Sub Configure(ByVal app As IApplicationBuilder, ByVal env As IHostingEnvironment)
            If env.IsDevelopment() Then ' Compliant
                ' The following calls are ok because they are disabled in production
                app.UseDeveloperExceptionPage()
                app.UseDatabaseErrorPage()
            End If
        End Sub
    End Class
End Namespace
```

See
||||||||||||
vbnet:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes). Ask Yourself Whether

Archives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

```vbnet
For Each entry As ZipArchiveEntry In archive.Entries
    ' entry.FullName could contain parent directory references ".." and the
    ' destinationPath variable could end up outside of the desired path
    Dim destinationPath As String = Path.GetFullPath(Path.Combine(path, entry.FullName))
    entry.ExtractToFile(destinationPath) ' Sensitive, extracts the entry to a file

    Dim stream As Stream
    stream = entry.Open() ' Sensitive, the entry is about to be extracted
Next
```

Compliant Solution

```vbnet
Const ThresholdRatio As Double = 10
Const ThresholdSize As Integer = 1024 * 1024 * 1024 ' 1 GB
Const ThresholdEntries As Integer = 10000

Dim TotalSizeArchive, TotalEntryArchive, TotalEntrySize, Cnt As Integer
Dim Buffer(1023) As Byte

Using ZipToOpen As New FileStream("ZipBomb.zip", FileMode.Open),
      Archive As New ZipArchive(ZipToOpen, ZipArchiveMode.Read)
    For Each Entry As ZipArchiveEntry In Archive.Entries
        Using s As Stream = Entry.Open
            TotalEntryArchive += 1
            TotalEntrySize = 0
            Do
                Cnt = s.Read(Buffer, 0, Buffer.Length)
                TotalEntrySize += Cnt
                TotalSizeArchive += Cnt
                ' A high ratio between compressed and uncompressed data is highly
                ' suspicious and looks like a Zip Bomb attack
                If TotalEntrySize / Entry.CompressedLength > ThresholdRatio Then Exit Do
            Loop While Cnt > 0
        End Using
        ' The uncompressed data size is too much for the application resource capacity
        If TotalSizeArchive > ThresholdSize Then Exit For
        ' Too many entries in this archive can lead to inodes exhaustion of the system
        If TotalEntryArchive > ThresholdEntries Then Exit For
    Next
End Using
```

See
|
||||||||||||
vbnet:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS#1 v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.
How to fix it in .NET

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

```vbnet
Imports System.Security.Cryptography

Public Module Example
    Public Sub Encrypt()
        Dim Algorithm As New AesManaged() With {
            .KeySize = 128,
            .BlockSize = 128,
            .Mode = CipherMode.ECB, ' Noncompliant
            .Padding = PaddingMode.PKCS7
        }
    End Sub
End Module
```

Example with an asymmetric cipher, RSA:

```vbnet
Imports System.Security.Cryptography

Public Module Example
    Public Sub Encrypt()
        Dim data(10) As Byte

        Dim RsaCsp = New RSACryptoServiceProvider()
        RsaCsp.Encrypt(data, False) ' Noncompliant: PKCS#1 v1.5 padding
    End Sub
End Module
```

Compliant solution

For the AES symmetric cipher, use the GCM mode:

```vbnet
Imports System.Security.Cryptography

Public Module Example
    Public Sub Encrypt()
        Dim key(31) As Byte ' 256-bit key; obtain it from a secure source in practice
        Dim Algorithm As New AesGcm(key)
    End Sub
End Module
```

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

```vbnet
Imports System.Security.Cryptography

Public Module Example
    Public Sub Encrypt()
        Dim data(10) As Byte

        Dim RsaCsp = New RSACryptoServiceProvider()
        RsaCsp.Encrypt(data, True) ' Compliant: OAEP padding
    End Sub
End Module
```

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthen the regular inner workings of RSA.

Resources

Articles & blog posts
Standards |
||||||||||||
vbnet:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

```vbnet
Imports System.Security.Cryptography

Public Sub Encrypt()
    Dim SimpleDES As New DESCryptoServiceProvider() ' Noncompliant
End Sub
```

Compliant solution

```vbnet
Imports System.Security.Cryptography

Public Sub Encrypt()
    Dim AES = Aes.Create()
End Sub
```

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size smaller than 128 bits.

Resources

Standards
||||||||||||
vbnet:S5659 |
This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Jwt.Net

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.
Noncompliant code example

```vbnet
Imports JWT

Public Sub Decode(decoder As IJwtDecoder)
    Dim decoded As String = decoder.Decode(token, secret, verify:=False) ' Noncompliant
End Sub
```

```vbnet
Imports JWT

Public Sub Decode()
    Dim decoded As String = New JwtBuilder() _
        .WithSecret(secret) _
        .Decode(token) ' Noncompliant
End Sub
```

Compliant solution

```vbnet
Imports JWT

Public Sub Decode(decoder As IJwtDecoder)
    Dim decoded As String = decoder.Decode(token, secret, verify:=True)
End Sub
```

When using `JwtBuilder`, require signature verification with `MustVerifySignature()`:

```vbnet
Imports JWT

Public Sub Decode()
    Dim decoded As String = New JwtBuilder() _
        .WithSecret(secret) _
        .MustVerifySignature() _
        .Decode(token)
End Sub
```

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose. Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked. To resolve the issue, follow these instructions:
By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted, to prevent service disruptions.

Resources

Standards
||||||||||||
vbnet:S5773 |
Deserialization is the process of converting serialized data (such as objects or data structures) back into their original form. Types allowed to be deserialized should be strictly controlled.

Why is this an issue?

During the deserialization process, the state of an object will be reconstructed from the serialized data stream. By allowing unrestricted deserialization of types, the application makes it possible for attackers to use types with dangerous or otherwise sensitive behavior during the deserialization process.

What is the potential impact?

When an application deserializes untrusted data without proper restrictions, an attacker can craft malicious serialized objects. Depending on the affected objects and properties, the consequences can vary.

Remote Code Execution

If attackers can craft malicious serialized objects that contain executable code, this code will run within the application’s context, potentially gaining full control over the system. This can lead to unauthorized access, data breaches, or even complete system compromise. For example, a well-known attack vector consists in serializing an object of a type whose deserialization triggers dangerous side effects.

Privilege escalation

Unrestricted deserialization can also enable attackers to escalate their privileges within the application. By manipulating the serialized data, an attacker can modify object properties or bypass security checks, granting them elevated privileges that they should not have. This can result in unauthorized access to sensitive data, unauthorized actions, or even administrative control over the application.

Denial of Service

In some cases, an attacker can abuse the deserialization process to cause a denial of service (DoS) condition. By providing specially crafted serialized data, the attacker can trigger excessive resource consumption, leading to system instability or unresponsiveness. This can disrupt the availability of the application, impacting its functionality and causing inconvenience to users.
How to fix it. Code examples. Noncompliant code example, with BinaryFormatter:

Dim myBinaryFormatter = New BinaryFormatter()
myBinaryFormatter.Deserialize(stream) ' Noncompliant

With JavaScriptSerializer:

Dim serializer1 As JavaScriptSerializer = New JavaScriptSerializer(New SimpleTypeResolver()) ' Noncompliant: SimpleTypeResolver is insecure (every type is resolved)
serializer1.Deserialize(Of ExpectedType)(json)

Compliant solution, with BinaryFormatter:

NotInheritable Class CustomBinder
    Inherits SerializationBinder

    Public Overrides Function BindToType(assemblyName As String, typeName As String) As Type
        If Not (Equals(typeName, "type1") OrElse Equals(typeName, "type2") OrElse Equals(typeName, "type3")) Then
            Throw New SerializationException("Only type1, type2 and type3 are allowed")
        End If
        Return Assembly.Load(assemblyName).[GetType](typeName)
    End Function
End Class

Dim myBinaryFormatter = New BinaryFormatter()
myBinaryFormatter.Binder = New CustomBinder()
myBinaryFormatter.Deserialize(stream)

With JavaScriptSerializer:

Public Class CustomSafeTypeResolver
    Inherits JavaScriptTypeResolver

    Public Overrides Function ResolveType(id As String) As Type
        If Not Equals(id, "ExpectedType") Then
            Throw New ArgumentException("Only ExpectedType is allowed during deserialization")
        End If
        Return Type.[GetType](id)
    End Function
End Class

Dim serializer As JavaScriptSerializer = New JavaScriptSerializer(New CustomSafeTypeResolver())
serializer.Deserialize(Of ExpectedType)(json)

Going the extra mile: instead of using a serializer that accepts arbitrary types, prefer a safer serialization format. If that is not possible, then try to mitigate the risk by restricting the types allowed to be deserialized:
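The allow-list technique shown above is portable across serializers. As an illustrative sketch of the same idea outside .NET, here is a Python unpickler restricted to an explicit allow-list (the allowed set is a hypothetical example):

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Only resolve an explicit allow-list of types during unpickling."""

    # Hypothetical allow-list: adjust to the types your application expects.
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        # Reject any type reference that is not explicitly allowed.
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"{module}.{name} is not allowed")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize untrusted bytes through the restricted unpickler."""
    return SafeUnpickler(io.BytesIO(data)).load()
```

Payloads built only from allowed types deserialize normally, while any reference to an unlisted class aborts deserialization with an error.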
Resources: Documentation
Articles & blog posts
Standards |
||||||||||||
vbnet:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue? Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact? After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surface: By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code and further exploit the system to obtain more information. Breach of confidentiality and privacy: When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization: customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issues: In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in .NET. Code examples. Noncompliant code example: these samples select a weak TLS protocol version, TLS v1.0.

Imports System.Net
Imports System.Security.Authentication

Public Sub Encrypt()
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls ' Noncompliant
End Sub

Imports System.Net.Http
Imports System.Security.Authentication

Public Sub Encrypt()
    Dim Handler As New HttpClientHandler With {
        .SslProtocols = SslProtocols.Tls ' Noncompliant
    }
End Sub

Compliant solution:

Imports System.Net
Imports System.Security.Authentication

Public Sub Encrypt()
    ServicePointManager.SecurityProtocol = _
        SecurityProtocolType.Tls12 _
        Or SecurityProtocolType.Tls13
End Sub

Imports System.Net.Http
Imports System.Security.Authentication

Public Sub Encrypt()
    Dim Handler As New HttpClientHandler With {
        .SslProtocols = SslProtocols.Tls12
    }
End Sub

How does this work? As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. 
The best choices at the moment are the following. Use TLS v1.2 or TLS v1.3. Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. Resources: Articles & blog posts
Standards
|
||||||||||||
vbnet:S5753 |
ASP.NET 1.1+ comes with a feature called Request Validation, which prevents the server from accepting content containing un-encoded HTML. This feature comes as a first protection layer against Cross-Site Scripting (XSS) attacks and acts as a simple Web Application Firewall (WAF), rejecting requests potentially containing malicious content. While this feature is not a silver bullet to prevent all XSS attacks, it helps to catch basic ones. It will, for example, prevent simple payloads containing un-encoded script tags from reaching the server. Note: since the Request Validation feature is only available for ASP.NET, no Security Hotspot is raised on ASP.NET Core applications. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example. At Controller level:

<ValidateInput(False)>
Public Function Welcome(Name As String) As ActionResult
    ...
End Function

At application level, configured in the Web.config file:

<configuration>
    <system.web>
        <pages validateRequest="false" />
        ...
        <httpRuntime requestValidationMode="0.0" />
    </system.web>
</configuration>

Compliant Solution. At Controller level:

<ValidateInput(True)>
Public Function Welcome(Name As String) As ActionResult
    ...
End Function

or

Public Function Welcome(Name As String) As ActionResult
    ...
End Function

At application level, configured in the Web.config file:

<configuration>
    <system.web>
        <pages validateRequest="true" />
        ...
        <httpRuntime requestValidationMode="4.5" />
    </system.web>
</configuration>

See
|
||||||||||||
vbnet:S2257 |
The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard, publicly reviewed algorithms such as SHA-256 should be used instead. This rule tracks custom implementations of the cryptographic base types from System.Security.Cryptography, such as classes inheriting HashAlgorithm.
Recommended Secure Coding Practices
Sensitive Code Example

Public Class CustomHash ' Noncompliant
    Inherits HashAlgorithm

    Private fResult() As Byte

    Public Overrides Sub Initialize()
        fResult = Nothing
    End Sub

    Protected Overrides Function HashFinal() As Byte()
        Return fResult
    End Function

    Protected Overrides Sub HashCore(array() As Byte, ibStart As Integer, cbSize As Integer)
        fResult = If(fResult, array.Take(8).ToArray)
    End Sub
End Class

Compliant Solution

Dim mySHA256 As SHA256 = SHA256.Create()

See
|
||||||||||||
vbnet:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

Dim username As String = "admin"
Dim password As String = "Password123" ' Sensitive
Dim usernamePassword As String = "user=admin&password=Password123" ' Sensitive
Dim url As String = "scheme://user:Admin123@domain.com" ' Sensitive

Compliant Solution

Dim username As String = "admin"
Dim password As String = GetEncryptedPassword()
Dim usernamePassword As String = String.Format("user={0}&password={1}", GetEncryptedUsername(), GetEncryptedPassword())
Dim url As String = $"scheme://{username}:{password}@domain.com"
Dim url2 As String = "http://guest:guest@domain.com" ' Compliant
Const Password_Property As String = "custom.password" ' Compliant

Exceptions
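The recommendation to keep credentials outside the code base can be sketched as follows. This is an illustrative Python sketch; the environment variable names, URL scheme, and database name are assumptions, not part of any specific application:

```python
import os

def database_url() -> str:
    """Build a connection string from environment variables instead of
    hard-coding credentials in source control (names are illustrative)."""
    user = os.environ["DB_USER"]          # fails loudly if unset
    password = os.environ["DB_PASSWORD"]
    host = os.environ.get("DB_HOST", "localhost")
    return f"postgresql://{user}:{password}@{host}/app"
```

The same pattern applies with a secrets manager or vault service: the code asks for the credential by name at runtime and never embeds the value itself.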
See
|
||||||||||||
vbnet:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure for security-sensitive purposes. Ask Yourself Whether: The hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Safer alternatives, such as SHA-256, SHA-384, or SHA-512, should be considered. Sensitive Code Example

Imports System.Security.Cryptography

Sub ComputeHash()
    ' Review all instantiations of classes that inherit from HashAlgorithm, for example:
    Dim hashAlgo As HashAlgorithm = HashAlgorithm.Create() ' Sensitive
    Dim hashAlgo2 As HashAlgorithm = HashAlgorithm.Create("SHA1") ' Sensitive
    Dim sha As SHA1 = New SHA1CryptoServiceProvider() ' Sensitive
    Dim md5 As MD5 = New MD5CryptoServiceProvider() ' Sensitive
    ' ...
End Sub

Class MyHashAlgorithm
    Inherits HashAlgorithm ' Sensitive
    ' ...
End Class

Compliant Solution

Imports System.Security.Cryptography

Sub ComputeHash()
    Dim sha256 = New SHA256CryptoServiceProvider() ' Compliant
    Dim sha384 = New SHA384CryptoServiceProvider() ' Compliant
    Dim sha512 = New SHA512CryptoServiceProvider() ' Compliant
    ' ...
End Sub

See
|
||||||||||||
vbnet:S4792 |
This rule is deprecated, and will eventually be removed. Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities: Logs are useful before, during and after a security incident.
Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how they are logged. This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:
Sensitive Code Example
.Net Core: configure programmatically

Imports System
Imports System.Collections
Imports System.Collections.Generic
Imports Microsoft.AspNetCore
Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting
Imports Microsoft.Extensions.Configuration
Imports Microsoft.Extensions.DependencyInjection
Imports Microsoft.Extensions.Logging
Imports Microsoft.Extensions.Options

Namespace MvcApp
    Public Class ProgramLogging
        Public Shared Function CreateWebHostBuilder(args As String()) As IWebHostBuilder
            WebHost.CreateDefaultBuilder(args) _
                .ConfigureLogging(Function(hostingContext, Logging) ' Sensitive
                                      ' ...
                                  End Function) _
                .UseStartup(Of StartupLogging)()
            '...
        End Function
    End Class

    Public Class StartupLogging
        Public Sub ConfigureServices(services As IServiceCollection)
            services.AddLogging(Function(logging) ' Sensitive
                                    '...
                                End Function)
        End Sub

        Public Sub Configure(app As IApplicationBuilder, env As IHostingEnvironment, loggerFactory As ILoggerFactory)
            Dim config As IConfiguration = Nothing
            Dim level As LogLevel = LogLevel.Critical
            Dim includeScopes As Boolean = False
            Dim filter As Func(Of String, Microsoft.Extensions.Logging.LogLevel, Boolean) = Nothing
            Dim consoleSettings As Microsoft.Extensions.Logging.Console.IConsoleLoggerSettings = Nothing
            Dim azureSettings As Microsoft.Extensions.Logging.AzureAppServices.AzureAppServicesDiagnosticsSettings = Nothing
            Dim eventLogSettings As Microsoft.Extensions.Logging.EventLog.EventLogSettings = Nothing

            ' An issue will be raised for each call to an ILoggerFactory extension method adding loggers.
            loggerFactory.AddAzureWebAppDiagnostics() ' Sensitive
            loggerFactory.AddAzureWebAppDiagnostics(azureSettings) ' Sensitive
            loggerFactory.AddConsole() ' Sensitive
            loggerFactory.AddConsole(level) ' Sensitive
            loggerFactory.AddConsole(level, includeScopes) ' Sensitive
            loggerFactory.AddConsole(filter) ' Sensitive
            loggerFactory.AddConsole(filter, includeScopes) ' Sensitive
            loggerFactory.AddConsole(config) ' Sensitive
            loggerFactory.AddConsole(consoleSettings) ' Sensitive
            loggerFactory.AddDebug() ' Sensitive
            loggerFactory.AddDebug(level) ' Sensitive
            loggerFactory.AddDebug(filter) ' Sensitive
            loggerFactory.AddEventLog() ' Sensitive
            loggerFactory.AddEventLog(eventLogSettings) ' Sensitive
            loggerFactory.AddEventLog(level) ' Sensitive

            ' Only available for NET Standard 2.0 and above
            'loggerFactory.AddEventSourceLogger() ' Sensitive

            Dim providers As IEnumerable(Of ILoggerProvider) = Nothing
            Dim filterOptions1 As LoggerFilterOptions = Nothing
            Dim filterOptions2 As IOptionsMonitor(Of LoggerFilterOptions) = Nothing

            Dim factory As LoggerFactory = New LoggerFactory() ' Sensitive
            factory = New LoggerFactory(providers) ' Sensitive
            factory = New LoggerFactory(providers, filterOptions1) ' Sensitive
            factory = New LoggerFactory(providers, filterOptions2) ' Sensitive
        End Sub
    End Class
End Namespace

Log4Net

Imports System
Imports System.IO
Imports System.Xml
Imports log4net.Appender
Imports log4net.Config
Imports log4net.Repository

Namespace Logging
    Class Log4netLogging
        Private Sub Foo(ByVal repository As ILoggerRepository, ByVal element As XmlElement, ByVal configFile As FileInfo, ByVal configUri As Uri, ByVal configStream As Stream, ByVal appender As IAppender, ParamArray appenders As IAppender())
            log4net.Config.XmlConfigurator.Configure(repository) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, element) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configFile) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configUri) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configStream) ' Sensitive
            log4net.Config.XmlConfigurator.ConfigureAndWatch(repository, configFile) ' Sensitive

            log4net.Config.DOMConfigurator.Configure() ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(element) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, element) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(configFile) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configFile) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(configStream) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configStream) ' Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(configFile) ' Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(repository, configFile) ' Sensitive

            log4net.Config.BasicConfigurator.Configure() ' Sensitive
            log4net.Config.BasicConfigurator.Configure(appender) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(appenders) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appender) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appenders) ' Sensitive
        End Sub
    End Class
End Namespace

NLog: configure programmatically

Namespace Logging
    Class NLogLogging
        Private Sub Foo(ByVal config As NLog.Config.LoggingConfiguration)
            NLog.LogManager.Configuration = config ' Sensitive
        End Sub
    End Class
End Namespace

Serilog

Namespace Logging
    Class SerilogLogging
        Private Sub Foo()
            Dim config As Serilog.LoggerConfiguration = New Serilog.LoggerConfiguration() ' Sensitive
        End Sub
    End Class
End Namespace

See
|
||||||||||||
vbnet:S2077 |
Formatted SQL queries can be difficult to maintain and debug, and can increase the risk of SQL injection when concatenating untrusted values into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex or formatted queries. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

Public Sub SqlCommands(ByVal connection As SqlConnection, ByVal query As String, ByVal param As String)
    Dim sensitiveQuery As String = String.Concat(query, param)
    command = New SqlCommand(sensitiveQuery) ' Sensitive
    command.CommandText = sensitiveQuery ' Sensitive

    Dim adapter As SqlDataAdapter
    adapter = New SqlDataAdapter(sensitiveQuery, connection) ' Sensitive
End Sub

Public Sub Foo(ByVal context As DbContext, ByVal query As String, ByVal param As String)
    Dim sensitiveQuery As String = String.Concat(query, param)
    context.Database.ExecuteSqlCommand(sensitiveQuery) ' Sensitive
    context.Query(Of User)().FromSql(sensitiveQuery) ' Sensitive
End Sub

Compliant Solution

Public Sub Foo(ByVal context As DbContext, ByVal value As String)
    context.Database.ExecuteSqlCommand("SELECT * FROM mytable WHERE mycol=@p0", value) ' Compliant, the query is parameterized
End Sub

See
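Parameterization works the same way outside .NET: the driver binds the value, so user input is never concatenated into the SQL text. A minimal, illustrative Python sketch with the standard sqlite3 module (the table and column names are assumptions):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    """Parameterized query: '?' is a bound placeholder, so the value is
    treated as data, never as SQL syntax."""
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

A classic injection payload passed as `name` simply matches no rows, because it is compared as a literal string rather than parsed as SQL.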
|
||||||||||||
vbnet:S5693 |
Rejecting requests with significant content length is a good practice to control the network traffic intensity and thus resource consumption in order to prevent DoS attacks. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to customize the rule with the limit values that correspond to the web application. Sensitive Code Example

Imports Microsoft.AspNetCore.Mvc

Public Class MyController
    Inherits Controller

    <HttpPost>
    <DisableRequestSizeLimit> ' Sensitive: No size limit
    <RequestSizeLimit(10485760)> ' Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB
    Public Function PostRequest(model As Model) As IActionResult
        ' ...
    End Function

    <HttpPost>
    <RequestFormLimits(MultipartBodyLengthLimit = 10485760)> ' Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB
    Public Function MultipartFormRequest(model As Model) As IActionResult
        ' ...
    End Function
End Class

Compliant Solution

Imports Microsoft.AspNetCore.Mvc

Public Class MyController
    Inherits Controller

    <HttpPost>
    <RequestSizeLimit(8388608)> ' Compliant: 8388608 B = 8192 KB = 8 MB
    Public Function PostRequest(model As Model) As IActionResult
        ' ...
    End Function

    <HttpPost>
    <RequestFormLimits(MultipartBodyLengthLimit = 8388608)> ' Compliant: 8388608 B = 8192 KB = 8 MB
    Public Function MultipartFormRequest(model As Model) As IActionResult
        ' ...
    End Function
End Class

See
|
||||||||||||
vbnet:S5443 |
Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. In the past, it has led to the following vulnerabilities: This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory, like /tmp, or usage of an environment variable pointing to such a directory.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesOut of the box, .NET is missing secure-by-design APIs to create temporary files. To overcome this, one of the following options can be used:
Sensitive Code Example

Using Writer As New StreamWriter("/tmp/f") ' Sensitive
    ' ...
End Using

Dim Tmp As String = Environment.GetEnvironmentVariable("TMP") ' Sensitive

Compliant Solution

Dim RandomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())

' Creates a new file with write, non-inheritable permissions which is deleted on close.
Using FileStream As New FileStream(RandomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)
    Using Writer As New StreamWriter(FileStream)
        ' ...
    End Using
End Using

See
|
||||||||||||
vbnet:S5445 |
Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic. Why is this an issue? Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it. In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues. What is the potential impact? Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it. Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise. Information disclosure: Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive. For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements. 
Attack surface extension: An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise. For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over. How to fix it. Code examples. The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function. Noncompliant code example

Imports System.IO

Sub Example()
    Dim TempPath = Path.GetTempFileName() ' Noncompliant
    Using Writer As New StreamWriter(TempPath)
        Writer.WriteLine("content")
    End Using
End Sub

Compliant solution

Imports System.IO

Sub Example()
    Dim RandomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())
    Using FileStream As New FileStream(RandomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)
        Using Writer As New StreamWriter(FileStream)
            Writer.WriteLine("content")
        End Using
    End Using
End Sub

How does this work? Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks. Strong security controls: Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose. 
In general, applications should ensure that attackers can not create a file before them. This turns into the following requirements when creating the files:
Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them. Here, the example compliant code uses the FileMode.CreateNew and FileOptions.DeleteOnClose options to create the file only if it does not already exist and to delete it once it is closed. Resources: Documentation
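The same atomic-creation requirement can be sketched outside .NET. Here is an illustrative Python sketch relying on the standard tempfile module, which creates the file with O_CREAT | O_EXCL and owner-only permissions, so the name-selection and creation steps cannot be raced:

```python
import tempfile

def write_report(content: str) -> str:
    """Create a temporary file atomically and write to it. NamedTemporaryFile
    opens the file exclusively (O_CREAT | O_EXCL) with mode 0600, so no
    attacker can pre-create or swap the file under the chosen name."""
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".txt", delete=False
    ) as handle:
        handle.write(content)
        return handle.name  # caller is responsible for deleting the file
```

With `delete=True` (the default), the file is removed automatically when the handle is closed, mirroring the DeleteOnClose behavior in the .NET example above.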
Standards |
||||||||||||
vbnet:S2053 |
This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes. Why is this an issue? During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords. However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital. What is the potential impact? Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need. Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster. If multiple users have the same password and the same salt, their password hashes would be identical. 
This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once. A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before. With short salts, the probability of a collision between two users' password and salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred. Exceptions: To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:
When they are used for password storage, using a secure, random salt is required. However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted. How to fix it in .NET. Code examples. The following code contains examples of hard-coded salts. Noncompliant code example

Imports System.Security.Cryptography

Public Sub Hash(Password As String)
    Dim Salt As Byte() = Encoding.UTF8.GetBytes("salty")
    Dim Hashed As New Rfc2898DeriveBytes(Password, Salt) ' Noncompliant
End Sub

Compliant solution

Imports System.Security.Cryptography

Public Sub Hash(Password As String)
    Dim Hashed As New Rfc2898DeriveBytes(Password, 64)
End Sub

How does this work? This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 32 bytes (256 bits), as recommended by industry standards. In the case of the code sample, the Rfc2898DeriveBytes class automatically takes care of generating a secure salt if none is specified. Resources: Standards |
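The random-salt approach described above can be sketched as follows. This is an illustrative Python sketch using the standard library's PBKDF2 implementation; the iteration count is a placeholder, not a tuned recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a password hash with a fresh, cryptographically random
    32-byte salt (the minimum length recommended above)."""
    salt = os.urandom(32)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, expected)
```

Because each call draws a new salt, two users with the same password still end up with different hashes, which defeats the pre-computed-table and hash-clustering attacks described above.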
||||||||||||
vbnet:S2612 |
In Unix, the "others" class refers to all users except the owner of the file and the members of the group assigned to this file. In Windows, the "Everyone" group is similar and includes all members of the Authenticated Users group, as well as the built-in Guest account and several other built-in security accounts. Granting permissions to these groups can lead to unintended access to files. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: The most restrictive possible permissions should be assigned to files and directories. Sensitive Code Example
.Net Framework:

Dim unsafeAccessRule = New FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow)
Dim fileSecurity = File.GetAccessControl("path")
fileSecurity.AddAccessRule(unsafeAccessRule) ' Sensitive
fileSecurity.SetAccessRule(unsafeAccessRule) ' Sensitive
File.SetAccessControl("fileName", fileSecurity)

.Net / .Net Core:

Dim fileInfo = New FileInfo("path")
Dim fileSecurity = fileInfo.GetAccessControl()
fileSecurity.AddAccessRule(New FileSystemAccessRule("Everyone", FileSystemRights.Write, AccessControlType.Allow)) ' Sensitive
fileInfo.SetAccessControl(fileSecurity)

.Net / .Net Core using Mono.Posix.NETStandard:

Dim fileSystemEntry = UnixFileSystemInfo.GetFileSystemEntry("path")
fileSystemEntry.FileAccessPermissions = FileAccessPermissions.OtherReadWriteExecute ' Sensitive

Compliant Solution
.Net Framework:

Dim safeAccessRule = New FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny)
Dim fileSecurity = File.GetAccessControl("path")
fileSecurity.AddAccessRule(safeAccessRule)
File.SetAccessControl("path", fileSecurity)

.Net / .Net Core:

Dim safeAccessRule = New FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny)
Dim fileInfo = New FileInfo("path")
Dim fileSecurity = fileInfo.GetAccessControl()
fileSecurity.SetAccessRule(safeAccessRule)
fileInfo.SetAccessControl(fileSecurity)

.Net / .Net Core using Mono.Posix.NETStandard:

Dim fs = UnixFileSystemInfo.GetFileSystemEntry("path")
fs.FileAccessPermissions = FileAccessPermissions.UserExecute

See
|
||||||||||||
vbnet:S3884 |
This rule is deprecated, and will eventually be removed. Why is this an issue?
Specifically, these methods are meant to be called from non-managed code such as a C++ wrapper that then invokes the managed, i.e. C# or VB.NET, code. Noncompliant code example

Public Class Noncompliant

    <DllImport("ole32.dll")>
    Public Shared Function CoSetProxyBlanket(<MarshalAs(UnmanagedType.IUnknown)> pProxy As Object, dwAuthnSvc As UInt32, dwAuthzSvc As UInt32, <MarshalAs(UnmanagedType.LPWStr)> pServerPrincName As String, dwAuthnLevel As UInt32, dwImpLevel As UInt32, pAuthInfo As IntPtr, dwCapabilities As UInt32) As Integer
    End Function

    Public Enum RpcAuthnLevel
        [Default] = 0
        None = 1
        Connect = 2
        [Call] = 3
        Pkt = 4
        PktIntegrity = 5
        PktPrivacy = 6
    End Enum

    Public Enum RpcImpLevel
        [Default] = 0
        Anonymous = 1
        Identify = 2
        Impersonate = 3
        [Delegate] = 4
    End Enum

    Public Enum EoAuthnCap
        None = &H00
        MutualAuth = &H01
        StaticCloaking = &H20
        DynamicCloaking = &H40
        AnyAuthority = &H80
        MakeFullSIC = &H100
        [Default] = &H800
        SecureRefs = &H02
        AccessControl = &H04
        AppID = &H08
        Dynamic = &H10
        RequireFullSIC = &H200
        AutoImpersonate = &H400
        NoCustomMarshal = &H2000
        DisableAAA = &H1000
    End Enum

    <DllImport("ole32.dll")>
    Public Shared Function CoInitializeSecurity(pVoid As IntPtr, cAuthSvc As Integer, asAuthSvc As IntPtr, pReserved1 As IntPtr, level As RpcAuthnLevel, impers As RpcImpLevel, pAuthList As IntPtr, dwCapabilities As EoAuthnCap, pReserved3 As IntPtr) As Integer
    End Function

    Public Sub DoSomething()
        Dim Hres1 As Integer = CoSetProxyBlanket(Nothing, 0, 0, Nothing, 0, 0, IntPtr.Zero, 0) ' Noncompliant
        Dim Hres2 As Integer = CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero, RpcAuthnLevel.None, RpcImpLevel.Impersonate, IntPtr.Zero, EoAuthnCap.None, IntPtr.Zero) ' Noncompliant
    End Sub
End Class

Resources |
||||||||||||
vbnet:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which increases an attack’s impact. Ask Yourself Whether: The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices: Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without having to rebuild the software. Sensitive Code Example: Dim ip = "192.168.12.42" ' Sensitive Dim address = IPAddress.Parse(ip) Compliant Solution: Dim ip = ConfigurationManager.AppSettings("myapplication.ip") ' Compliant Dim address = IPAddress.Parse(ip) Exceptions: No issue is reported for the following cases because they are not considered sensitive:
See |
||||||||||||
vbnet:S4830 |
This vulnerability makes it possible for an encrypted communication to be intercepted. Why is this an issue? Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact? Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofing: If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrity: When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.
How to fix it in .NET. Code examples: In the following example, the callback change impacts the entirety of HTTP requests made by the application. The certificate validation gets disabled by overriding ServicePointManager.ServerCertificateValidationCallback with a callback that always returns True. Noncompliant code example: Imports System.Net Public Sub Send() ServicePointManager.ServerCertificateValidationCallback = Function(sender, certificate, chain, errors) True ' Noncompliant Dim request As System.Net.HttpWebRequest = System.Net.HttpWebRequest.Create(New System.Uri("https://example.com")) request.Method = System.Net.WebRequestMethods.Http.Get Dim response As System.Net.HttpWebResponse = request.GetResponse() response.Close() End Sub How does this work? Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificates: If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAs: In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. Resources: Standards
|
||||||||||||
vbnet:S6444 |
Not specifying a timeout for regular expressions can lead to a Denial-of-Service attack. Pass a timeout when using regular expressions to process untrusted input, so that a maliciously crafted value cannot make the evaluation run for an excessive amount of time.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: Public Sub RegexPattern(Input As String) Dim EmailPattern As New Regex(".+@.+", RegexOptions.None) Dim IsNumber as Boolean = Regex.IsMatch(Input, "[0-9]+") Dim IsLetterA as Boolean = Regex.IsMatch(Input, "(a+)+") End Sub Compliant Solution: Public Sub RegexPattern(Input As String) Dim EmailPattern As New Regex(".+@.+", RegexOptions.None, TimeSpan.FromMilliseconds(100)) Dim IsNumber as Boolean = Regex.IsMatch(Input, "[0-9]+", RegexOptions.None, TimeSpan.FromMilliseconds(100)) Dim IsLetterA As Boolean = Regex.IsMatch(Input, "(a+)+", RegexOptions.NonBacktracking) '.Net 7 And above AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(100)) 'process-wide setting End Sub See
|
||||||||||||
vbnet:S4036 |
When executing an OS command, unless you specify the full path to the executable, the locations listed in your application’s PATH environment variable will be searched for the executable, and an attacker able to place a malicious binary in one of those locations could get it executed instead. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: A fully qualified/absolute path should be used to specify the OS command to execute. Sensitive Code Example: Dim p As New Process() p.StartInfo.FileName = "binary" ' Sensitive Compliant Solution: Dim p As New Process() p.StartInfo.FileName = "C:\Apps\binary.exe" ' Compliant See |
||||||||||||
text:S6389 |
Using bidirectional (BIDI) characters can lead to incomprehensible code. The Unicode encoding contains BIDI control characters that are used to display text right-to-left (RTL) instead of left-to-right (LTR). This is necessary for certain languages that use RTL text. The BIDI characters can be used to create a difference in the code between what a human sees and what a compiler or interpreter sees. An adversary might use this feature to hide a backdoor in the code that will not be spotted by a human reviewer, as it is not visible. This can lead to supply chain attacks since the backdoored code might persist for a long time without being detected and can even be included in other projects, for example in the case of libraries. Ask Yourself Whether
There is a risk if you answered no to any of these questions. Recommended Secure Coding Practices: Open the file in an editor that reveals non-ASCII characters and remove all BIDI control characters that are not intended. If hidden characters are illegitimate, this issue could indicate a potential ongoing attack on the code. Therefore, it would be best to warn your organization’s security team about this issue. Required opening BIDI characters should be explicitly closed with the PDI character. Sensitive Code Example: A hidden BIDI character is present in front of def subtract_funds(account: str, amount: int): ''' Subtract funds from bank account then ''' ;return bank[account] -= amount return The executed code looks like the following: def subtract_funds(account: str, amount: int): ''' Subtract funds from bank account then <RLI>''' ;return bank[account] -= amount return Compliant Solution: No hidden BIDI characters are present: def subtract_funds(account: str, amount: int): ''' Subtract funds from bank account then return; ''' bank[account] -= amount return See
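The review step above can also be automated. A minimal sketch in JavaScript (a hypothetical helper, not part of any scanner; the character ranges cover the Unicode BIDI embedding, override, and isolate controls):

```javascript
// BIDI control characters: LRE/RLE/PDF/LRO/RLO (U+202A..U+202E)
// and LRI/RLI/FSI/PDI (U+2066..U+2069).
const BIDI_CONTROLS = /[\u202A-\u202E\u2066-\u2069]/;

// Returns true if the given source text contains hidden BIDI controls.
function containsBidiControls(text) {
  return BIDI_CONTROLS.test(text);
}
```

A scanner built on this check would run over every source file and flag matching lines for manual review.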
|
||||||||||||
typescript:S5732 |
Clickjacking attacks occur when an attacker tries to trick a user into clicking certain buttons/links of a legitimate website. This attack can take place with malicious HTML frames well hidden in the attacker’s website. For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of their profile by clicking a button. This is a critical feature with high privacy concerns. Users of the social network are generally well informed about the consequences of this action. An attacker can trick users, without their consent, into performing this action with the embedded code below added to a malicious website: <html> <b>Click on the button below to win 5000$</b> <br> <iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe> </html> By playing with the size of the iframe, it is sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins of frames allowed to be loaded by the browser (this directive deprecates X-Frame-Options). Sensitive Code Example: In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the frameAncestors directive properly configured: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { // other directives frameAncestors: ["'none'"] // Sensitive: frameAncestors is set to none } }) ); Compliant Solution: In an Express.js application, a standard way to implement the CSP frame-ancestors directive is the helmet-csp or helmet middleware: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { // other directives frameAncestors: ["'example.com'"] // Compliant } }) ); See
|
||||||||||||
typescript:S5730 |
Mixed content occurs when a resource is loaded with the HTTP protocol from a website accessed with the HTTPS protocol. Mixed content is not encrypted, is exposed to MITM attacks, and can break the entire level of protection that was desired by implementing encryption with the HTTPS protocol. The main threat with mixed content is not only the confidentiality of resources but the integrity of the whole website:
Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and will block the loading of mixed content. Sensitive Code Example: In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { "default-src": ["'self'", 'example.com', 'code.jquery.com'] } // Sensitive: blockAllMixedContent directive is missing }) ); Compliant Solution: In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:
const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.contentSecurityPolicy({ directives: { "default-src": ["'self'", 'example.com', 'code.jquery.com'], blockAllMixedContent: [] // Compliant } }) ); See
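The condition the directive guards against can be sketched independently of Express. The helper below is hypothetical and uses the WHATWG URL API to decide whether a subresource would count as mixed content:

```javascript
// A subresource is mixed content when the page is served over HTTPS
// but the resource resolves to plain HTTP.
function isMixedContent(pageUrl, resourceUrl) {
  const page = new URL(pageUrl);
  const resource = new URL(resourceUrl, pageUrl); // resolves relative URLs
  return page.protocol === 'https:' && resource.protocol === 'http:';
}
```

Note that relative URLs inherit the page's scheme, so only absolute http:// references on an HTTPS page trip the check.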
|
||||||||||||
typescript:S5734 |
MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet …), web browsers look for the Content-Type header defined in the HTTP response received from the server, but often this header is not set or is set with an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content-type, generally by inspecting the content of the resource (the first bytes). This "guess mechanism" is called MIME type sniffing. Attackers can take advantage of this feature when a website ("example.com" here) allows arbitrary files to be uploaded. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as: <script>alert(document.cookie)</script> When the victim visits the website showing the uploaded image, the malicious script embedded into the image will be executed by web browsers performing MIME type sniffing. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and will prevent browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch, the resource is not interpreted. For example, within a <script> object context, JavaScript MIME types are expected (like application/javascript) in the Content-Type header. Sensitive Code Example: In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet({ noSniff: false, // Sensitive }) ); Compliant Solution: When using helmet, enable the noSniff middleware: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.noSniff()); See
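What the middleware does boils down to adding a single response header. A plain-object sketch of that effect (hypothetical helper, not part of helmet):

```javascript
// Mimics the effect of helmet.noSniff(): add the X-Content-Type-Options
// header with its only valid value, "nosniff".
function withNoSniff(headers = {}) {
  return { ...headers, 'X-Content-Type-Options': 'nosniff' };
}
```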
|
||||||||||||
typescript:S5736 |
The HTTP Referer header contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks. Suppose an e-commerce website asks the user for their credit card number to purchase a product: <html> <body> <form action="/valid_order" method="GET"> Type your credit card number to purchase products: <input type=text id="cc" value="1111-2222-3333-4444"> <input type=submit> </form> </body> When submitting the above HTML form, an HTTP GET request will be performed; the URL requested will be https://example.com/valid_order?cc=1111-2222-3333-4444 with the credit card number inside, which is obviously not secure, for these reasons:
In addition to these threats, when further requests are performed from the "valid_order" page with a simple legitimate embedded script like this: <script src="https://webanalyticservices_example.com/track"> the Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue: GET /track HTTP/2.0 Host: webanalyticservices_example.com Referer: https://example.com/valid_order?cc=1111-2222-3333-4444 Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Confidential information should not be set inside URLs (GET requests) of the application, and a safe referrer policy (i.e. different from no-referrer-when-downgrade) should be used. Sensitive Code Example: In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is used with an unsafe value: const express = require('express'); const helmet = require('helmet'); app.use( helmet.referrerPolicy({ policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used }) ); Compliant Solution: In an Express.js application, a secure solution is to use the helmet referrerPolicy middleware set to no-referrer: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.referrerPolicy({ policy: 'no-referrer' // Compliant }) ); See
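Beyond the header, the root cause is confidential data in the URL itself. A hypothetical helper that strips the query string before a URL is logged or embedded anywhere illustrates the idea:

```javascript
// Drop the query string from a URL, since query parameters (such as the
// credit card number in the example above) also leak via the Referer header.
function redactQuery(url) {
  const u = new URL(url);
  u.search = '';
  return u.toString();
}
```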
|
||||||||||||
typescript:S5852 |
Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input. In some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, which means that a small, carefully crafted input (like 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact too, in this case with a large, carefully crafted input (thousands of chars). This rule determines the runtime complexity of a regular expression and informs you if it is not linear. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesTo avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression. In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.
In order to rewrite your regular expression without these patterns, consider the following strategies:
Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match. Especially when the regex is not anchored to the beginning of the string, for which it is quite hard to avoid quadratic runtimes. In those cases consider the following approaches:
Sensitive Code Example: The regex evaluation will practically never end: /(a+)+$/.test( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaa!" ); // Sensitive Compliant Solution: Possessive quantifiers do not keep backtracking positions and thus can be used, if possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences: /((?=(a+))\2)+$/.test( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaa!" ); // Compliant See
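Where the pattern allows it, an even simpler fix is to remove the nested quantifier altogether: (a+)+ accepts exactly the same strings as a plain a+, only the runtime differs. A short sketch of that equivalence (deliberately kept to short inputs, since a long failing input would hang the nested version):

```javascript
const nested = /(a+)+$/; // exponential backtracking on long failing inputs
const flat = /a+$/;      // matches the same strings in linear time

// Both patterns give the same verdict on any input.
function sameVerdict(input) {
  return nested.test(input) === flat.test(input);
}
```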
|
||||||||||||
typescript:S2598 |
Why is this an issue? If the file upload feature is implemented without proper folder restriction, it will result in an implicit trust violation within the server, as trusted files will be implicitly stored alongside third-party files that should be considered untrusted. This can allow an attacker to disrupt the security of an internal server process or the running application. What is the potential impact? After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, bash scripts, malware, or malicious configuration files targeting potential processes. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Full application compromise: In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution. Depending on the attacker, code execution can be used with different intentions:
Server Resource Exhaustion: By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service. If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it. Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources. In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency. These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP). How to fix it in Formidable. Code examples. Noncompliant code example: const Formidable = require('formidable'); const form = new Formidable(); // Noncompliant form.uploadDir = "/tmp/"; form.keepExtensions = true; Compliant solution: const Formidable = require('formidable'); const form = new Formidable(); form.uploadDir = "/uploads/"; form.keepExtensions = false; How does this work? Use pre-approved folders: Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:
This folder should not be located inside the application’s own directory tree. Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names. Resources
|
||||||||||||
typescript:S5739 |
When implementing the HTTPS protocol, websites usually continue to support the HTTP protocol in order to redirect users to HTTPS when they request an HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS. Web browsers that see the Strict-Transport-Security policy header for the first time record the information specified in the header:
With the Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Implement the Strict-Transport-Security policy header; it is recommended to apply this policy to all subdomains (includeSubDomains). Sensitive Code Example: In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without the recommended values: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.hsts({ maxAge: 3153600, // Sensitive, recommended >= 15552000 includeSubDomains: false // Sensitive, recommended 'true' })); Compliant Solution: In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.hsts({ maxAge: 31536000, includeSubDomains: true })); // Compliant See
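As a sanity check on the numbers in the examples: the compliant maxAge is one year expressed in seconds, while the sensitive value 3153600 is exactly a factor of ten smaller (about 36 days), which suggests a dropped digit:

```javascript
// One year in seconds: the value used in the compliant example.
const ONE_YEAR_IN_SECONDS = 365 * 24 * 60 * 60; // 31536000

// The sensitive example's value, a factor of ten smaller.
const SENSITIVE_MAX_AGE = 3153600;
```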
|
||||||||||||
typescript:S5742 |
Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CAs) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into verifying a spoofed identity/forged certificate. CAs implement the Certificate Transparency framework to publicly log the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs to verify that their identity was not usurped. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs in order to verify that the website appears inside them; if it does not, the browser will block the request and display a warning to the user. Sensitive Code Example: In an Express.js application, the code is sensitive if the expect-ct middleware is disabled: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet({ expectCt: false // Sensitive }) ); Compliant Solution: In an Express.js application, the expect-ct middleware is the standard way to implement
expect-ct. Usually, the deployment of this policy starts with the report-only mode (without the enforce directive) before enforcement is enabled: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.expectCt({ enforce: true, maxAge: 86400 })); // Compliant See
|
||||||||||||
typescript:S5743 |
This rule is deprecated, and will eventually be removed. By default, web browsers perform DNS prefetching to reduce the latency of DNS resolutions required when a user clicks links on a website page. For instance, on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser: <a href="https://otherexample.com">go on our partner website</a> It can add significant latency during requests, especially if the page contains many links to cross-origin domains. DNS prefetch allows web browsers to perform DNS resolving in the background before the user clicks a link. This feature can cause privacy issues because DNS resolving from the user’s computer is performed without their consent if they do not intend to go to the linked website. On a complex private webpage, a combination "of unique links/DNS resolutions" can indicate, to an eavesdropper for instance, that the user is visiting the private page. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices: Implement the X-DNS-Prefetch-Control header with an off value, although this could significantly degrade website performance. Sensitive Code Example: In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.dnsPrefetchControl({ allow: true // Sensitive: allowing DNS prefetching is security-sensitive }) ); Compliant Solution: In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement this control: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet.dnsPrefetchControl({ allow: false // Compliant }) ); See
|
||||||||||||
typescript:S4502 |
A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, and more generally anything that can change the state of the application. The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request; as web browsers automatically include cookies, the actions can be authenticated and sensitive. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: Express.js CSURF middleware protection is not found on an unsafe HTTP method like the POST method: let csrf = require('csurf'); let express = require('express'); let csrfProtection = csrf({ cookie: true }); let app = express(); // Sensitive: this operation doesn't look like it is protected by the CSURF middleware (csrfProtection is not used) app.post('/money_transfer', parseForm, function (req, res) { res.send('Money transferred'); }); Protection provided by the Express.js CSURF middleware is globally disabled on unsafe methods: let csrf = require('csurf'); let express = require('express'); app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive as POST is an unsafe method Compliant Solution: Express.js CSURF middleware protection is used on unsafe methods: let csrf = require('csurf'); let express = require('express'); let csrfProtection = csrf({ cookie: true }); let app = express(); app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant res.send('Money transferred') }); Protection provided by the Express.js CSURF middleware is enabled on unsafe methods: let csrf = require('csurf'); let express = require('express'); app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant See |
||||||||||||
typescript:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Do not enable debugging features on production servers or applications distributed to end users. Sensitive Code Example: The errorhandler Express.js middleware should not be used in production: const express = require('express'); const errorhandler = require('errorhandler'); let app = express(); app.use(errorhandler()); // Sensitive Compliant Solution: The errorhandler Express.js middleware is used only in development mode: const express = require('express'); const errorhandler = require('errorhandler'); let app = express(); if (process.env.NODE_ENV === 'development') { app.use(errorhandler()); } See |
||||||||||||
typescript:S5604 |
Powerful features are browser features (geolocation, camera, microphone …) that can be accessed with JavaScript API and may require a permission granted by the user. These features can have a high impact on privacy and user security thus they should only be used if they are really necessary to implement the critical parts of an application. This rule highlights intrusive permissions when requested with the future standard (but currently experimental) web browser query API and specific APIs related to the permission. It is highly recommended to customize this rule with the permissions considered as intrusive in the context of the web application. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: When using the geolocation API, Firefox for example retrieves personal information like nearby wireless access points and IP address and sends it to the default geolocation service provider, Google Location Services: navigator.permissions.query({name:"geolocation"}).then(function(result) { }); // Sensitive: geolocation is a powerful feature with high privacy concerns navigator.geolocation.getCurrentPosition(function(position) { console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude); }); // Sensitive: geolocation is a powerful feature with high privacy concerns Compliant Solution: If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible: <html> <head> <title> Retailer website example </title> </head> <body> Type a city, street or zip code where you want to retrieve the closest retail locations of our products: <form method=post> <input type=text value="New York"> <!-- Compliant --> </form> </body> </html> See
|
||||||||||||
typescript:S5725 |
Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application. On the client side, where front-end code is executed, malicious code could:
Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:
By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes
applied to it before it is downloaded. Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesTo check the integrity of a remote artifact, hash verification is the most reliable solution. It ensures that the file has not been modified since the fingerprint was computed. In this case, the artifact’s hash must:
To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings. Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes. Sensitive Code ExampleThe following code sample uses neither integrity checks nor version pinning: let script = document.createElement("script"); script.src = "https://cdn.example.com/latest/script.js"; // Sensitive script.crossOrigin = "anonymous"; document.head.appendChild(script); Compliant Solutionlet script = document.createElement("script"); script.src = "https://cdn.example.com/v5.3.6/script.js"; script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"; script.crossOrigin = "anonymous"; document.head.appendChild(script); See |
||||||||||||
typescript:S5728 |
Content security policy (CSP) fetch directives are part of a W3C standard used by a server to specify, via an HTTP header, the origins from which the browser is allowed to load resources. It can help to mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website doesn’t define a CSP header, the browser applies the same-origin policy by default. Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com In the above example, all resources are allowed from the website where this header is set, and script resources fetched from example.com are also authorized: <img src="selfhostedimage.png"> <!-- will be loaded because the default-src 'self' directive is applied --> <img src="http://www.example.com/image.png"> <!-- will NOT be loaded because the default-src 'self' directive is applied --> <script src="http://www.example.com/library.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive is applied --> <script src="selfhostedscript.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive is applied --> <script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because the script-src 'self' http://www.example.com directive is applied --> Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesImplement content security policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application; CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks. Sensitive Code ExampleIn an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use( helmet({ contentSecurityPolicy: false, // sensitive }) ); Compliant SolutionIn an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware: const express = require('express'); const helmet = require('helmet'); let app = express(); app.use(helmet.contentSecurityPolicy()); // Compliant See
|
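Independently of helmet, it can help to see that a CSP header value is just a semicolon-separated list of fetch directives, each followed by its allowed sources. A small illustrative builder (the function and its inputs are made up for this sketch; only the directive names and header shape follow the standard):

```javascript
// Build a Content-Security-Policy header value from a map of
// fetch directives to their allowed source lists.
function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(' ')}`)
    .join('; ');
}

const header = buildCsp({
  'default-src': ["'self'"],
  'script-src': ["'self'", 'http://www.example.com'],
});
console.log(header);
// default-src 'self'; script-src 'self' http://www.example.com
```

The resulting string is what the rule's example header contains; in practice a framework middleware such as helmet should produce and maintain it rather than hand-rolled code.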
||||||||||||
typescript:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution. For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in Node.jsCode examplesNoncompliant code exampleExample with a symmetric cipher, AES: const crypto = require('crypto'); crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant Compliant solutionExample with a symmetric cipher, AES: const crypto = require('crypto'); crypto.createCipheriv("AES-256-GCM", key, iv); How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. 
Appropriate choices are currently the following. For AES: use authenticated encryption modesThe best-known authenticated encryption mode for AES is Galois/Counter mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead. For RSA: use the OAEP schemeThe Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA. ResourcesArticles & blog posts
Standards |
||||||||||||
typescript:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in Node.jsCode examplesThe following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided. Noncompliant code exampleconst crypto = require('crypto'); crypto.createCipheriv("DES", key, iv); // Noncompliant Compliant solutionconst crypto = require('crypto'); crypto.createCipheriv("AES-256-GCM", key, iv); How does this work?Use a secure algorithmIt is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits. ResourcesStandards |
||||||||||||
typescript:S5659 |
This vulnerability allows forging of JSON Web Tokens to impersonate other users. Why is this an issue?JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature. What is the potential impact?When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities. Impersonation of usersJWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data. Unauthorized data accessWhen a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access. How to fix it in jsonwebtokenCode examplesThe following code contains examples of JWT encoding and decoding without a strong cipher algorithm. 
Noncompliant code exampleconst jwt = require('jsonwebtoken'); jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant const jwt = require('jsonwebtoken'); jwt.verify(token, key, { expiresIn: 360000, algorithms: ['none'] // Noncompliant }, callbackcheck); Compliant solutionconst jwt = require('jsonwebtoken'); jwt.sign(payload, key, { algorithm: 'HS256' }); const jwt = require('jsonwebtoken'); jwt.verify(token, key, { expiresIn: 360000, algorithms: ['HS256'] }, callbackcheck); How does this work?Always sign your tokensThe foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created. Choose a strong cipher algorithmIt is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens. Verify the signature of your tokensResolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose. Every time your application receives a JWT, it needs to decode the token to extract the information contained within. 
It is during this decoding process that the signature of the JWT should also be checked. To resolve the issue, follow these instructions:
By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process. Going the extra mileSecurely store your secret keysEnsure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services. Rotate your secret keysEven with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions. ResourcesStandards |
||||||||||||
typescript:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleconst val = Math.random(); // Sensitive // Check if val is used in a security context. Compliant Solution// === Client side === const crypto = window.crypto || window.msCrypto; var array = new Uint32Array(1); crypto.getRandomValues(array); // Compliant for security-sensitive use cases // === Server side === const crypto = require('crypto'); const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases See
|
||||||||||||
typescript:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Node.jsCode examplesNoncompliant code exampleNode.js offers multiple ways to set weak TLS protocols. These options are used by the https and tls modules, and by other third-party libraries as well.
The first is the secureProtocol option: const https = require('node:https'); const tls = require('node:tls'); let options = { secureProtocol: 'TLSv1_method' // Noncompliant }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); The second is the combination of the minVersion and maxVersion options: const https = require('node:https'); const tls = require('node:tls'); let options = { minVersion: 'TLSv1.1', // Noncompliant maxVersion: 'TLSv1.2' }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); And the third is the secureOptions option: const https = require('node:https'); const tls = require('node:tls'); const { constants } = require('node:crypto'); let options = { secureOptions: constants.SSL_OP_NO_SSLv2 | constants.SSL_OP_NO_SSLv3 | constants.SSL_OP_NO_TLSv1 }; // Noncompliant let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); Compliant solutionconst https = require('node:https'); const tls = require('node:tls'); let options = { secureProtocol: 'TLSv1_2_method' }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); const https = require('node:https'); const tls = require('node:tls'); let options = { minVersion: 'TLSv1.2', maxVersion: 'TLSv1.2' }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions: const https = require('node:https'); const tls = require('node:tls'); const { constants } = require('node:crypto'); let options = { secureOptions: constants.SSL_OP_NO_SSLv2 | constants.SSL_OP_NO_SSLv3 | constants.SSL_OP_NO_TLSv1 | constants.SSL_OP_NO_TLSv1_1 }; let req = https.request(options, (res) => { }); let socket = tls.connect(443, "www.example.com", options, () => { }); How does this work?As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered
strong by the cryptographic community. The best choices at the moment are the following. Use TLS v1.2 or TLS v1.3Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. ResourcesArticles & blog posts
Standards
|
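One way to apply the compliant settings consistently is to centralize them in a helper. minVersion and maxVersion are real Node tls option names; the helper itself and the host used below are made-up conveniences for this sketch:

```javascript
// Hardened defaults for Node's https/tls clients and servers:
// allow only TLS 1.2 and 1.3, and let callers merge in their own options.
function hardenedTlsOptions(extra = {}) {
  return { minVersion: 'TLSv1.2', maxVersion: 'TLSv1.3', ...extra };
}

// Usage sketch: pass the merged object to https.request or tls.connect.
const options = hardenedTlsOptions({ host: 'www.example.com', port: 443 });
console.log(options.minVersion); // TLSv1.2
```

Centralizing the options means a single code review catches a weak setting, instead of auditing every call site that builds its own options object.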
||||||||||||
typescript:S4787 |
This rule is deprecated; use S4426, S5542, S5547 instead. Encrypting data is security-sensitive. It has led in the past to the following vulnerabilities: Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only means to defeat or weaken an encryption. This rule flags function calls that initiate encryption/decryption. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example// === Client side === crypto.subtle.encrypt(algo, key, plainData); // Sensitive crypto.subtle.decrypt(algo, key, encData); // Sensitive // === Server side === const crypto = require("crypto"); const cipher = crypto.createCipher(algo, key); // Sensitive const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive const decipher = crypto.createDecipher(algo, key); // Sensitive const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive See
|
||||||||||||
typescript:S5876 |
An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled. Why is this an issue?Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:
What is the potential impact?Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following: ImpersonationOnce an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf. Data BreachIf an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes. Privilege EscalationIn some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems. How to fix it in PassportCode examplesUpon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve
this by using the req.session.regenerate() method. Noncompliant code exampleapp.post('/login', passport.authenticate('local', { failureRedirect: '/login' }), function(req, res) { // Noncompliant - no session.regenerate after login res.redirect('/'); }); Compliant solutionapp.post('/login', passport.authenticate('local', { failureRedirect: '/login' }), function(req, res) { let prevSession = req.session; req.session.regenerate((err) => { Object.assign(req.session, prevSession); res.redirect('/'); }); }); How does this work?The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process. Here’s how session fixation protection typically works:
By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process. ResourcesDocumentation
Articles & blog postsStandards |
||||||||||||
typescript:S3330 |
When a cookie is configured with the HttpOnly attribute set to false, it can be read by client-side JavaScript, and a cross-site scripting (XSS) vulnerability can be exploited to steal it. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplecookie-session module: let session = cookieSession({ httpOnly: false, // Sensitive }); express-session module: const express = require('express'); const session = require('express-session'); let app = express(); app.use(session({ cookie: { httpOnly: false // Sensitive } })); cookies module: let cookies = new Cookies(req, res, { keys: keys }); cookies.set('LastVisit', new Date().toISOString(), { httpOnly: false // Sensitive }); csurf module: const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const express = require('express'); let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive Compliant Solutioncookie-session module: let session = cookieSession({ httpOnly: true, // Compliant }); express-session module: const express = require('express'); const session = require('express-session'); let app = express(); app.use(session({ cookie: { httpOnly: true // Compliant } })); cookies module: let cookies = new Cookies(req, res, { keys: keys }); cookies.set('LastVisit', new Date().toISOString(), { httpOnly: true // Compliant }); csurf module: const cookieParser = require('cookie-parser'); const csrf = require('csurf'); const express = require('express'); let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant See
|
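All of the modules above ultimately emit a Set-Cookie response header, and the HttpOnly token in that header is what hides the cookie from document.cookie. A hand-rolled sketch of such a header (the helper is illustrative, not part of any of these modules):

```javascript
// Serialize a cookie with the attributes that matter for session cookies.
function serializeCookie(name, value, { httpOnly = true, secure = true, sameSite = 'Lax' } = {}) {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (httpOnly) parts.push('HttpOnly'); // invisible to document.cookie
  if (secure) parts.push('Secure');     // only sent over HTTPS
  parts.push(`SameSite=${sameSite}`);
  return parts.join('; ');
}

console.log(serializeCookie('session', 'abc 123'));
// session=abc%20123; HttpOnly; Secure; SameSite=Lax
```

Setting httpOnly to false simply drops the HttpOnly token from this header, which is exactly what the sensitive examples above do and why an XSS payload can then read the cookie.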
||||||||||||
typescript:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Node.jsCode examplesThe following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm. Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm. 
Noncompliant code exampleHere is an example of a private key generation with RSA: const crypto = require('crypto'); function callback(err, pub, priv) {} var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 1024, // Noncompliant publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); Here is an example of a key generation with the Digital Signature Algorithm (DSA): const crypto = require('crypto'); function callback(err, pub, priv) {} var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', { modulusLength: 1024, // Noncompliant publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name: const crypto = require('crypto'); function callback(err, pub, priv) {} var { privateKey, publicKey } = crypto.generateKeyPair('ec', { namedCurve: 'secp112r2', // Noncompliant publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); Compliant solutionHere is an example of a private key generation with RSA: const crypto = require('crypto'); function callback(err, pub, priv) {} var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048, publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); Here is an example of a key generation with the Digital Signature Algorithm (DSA): const crypto = require('crypto'); function callback(err, pub, priv) {} var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', { modulusLength: 2048, publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); Here is an example of an Elliptic Curve (EC) initialization. 
It implicitly generates a private key whose size is indicated in the elliptic curve name: const crypto = require('crypto'); function callback(err, pub, priv) {} var { privateKey, publicKey } = crypto.generateKeyPair('ec', { namedCurve: 'secp224k1', publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem' } }, callback); How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community. The appropriate choices are the following. RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible. AES (Advanced Encryption Standard)AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying
all possible keys. Currently, a minimum key size of 128 bits is recommended for AES. Elliptic Curve Cryptography (ECC)Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve
algorithms is mentioned directly in their names. For example, Currently, a minimum key size of 224 bits is recommended for EC-based algorithms. Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:
Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources
Articles & blog posts
Standards
|
||||||||||||
typescript:S4784 |
This rule is deprecated; use S5852 instead. Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities: Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions, such as (a+)+b, can take an extremely long time to evaluate. Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions, thereby making the service inaccessible to genuine users. This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters:

Example:

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. If possible, use a library which is not vulnerable to ReDoS attacks, such as Google's RE2. Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won't detect this kind of injection.

Sensitive Code Example

    const regex = /(a+)+b/; // Sensitive
    const regex2 = new RegExp("(a+)+b"); // Sensitive

    str.search("(a+)+b"); // Sensitive
    str.match("(a+)+b"); // Sensitive
    str.split("(a+)+b"); // Sensitive

Note: String.matchAll does not raise any issue as it is not supported by NodeJS.

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See
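A complementary mitigation, not taken from the rule text, is to cap the length of untrusted input before a backtracking-prone pattern ever runs (a minimal sketch; the limit value and helper name are arbitrary assumptions):

```javascript
// Hypothetical validator: refuse oversized input up front, so
// catastrophic backtracking cannot be triggered at scale.
const MAX_INPUT_LENGTH = 64; // assumption: tune to the real use case

function looksLikeAs(input) {
  if (typeof input !== 'string' || input.length > MAX_INPUT_LENGTH) {
    return false; // the vulnerable regex is never evaluated on huge inputs
  }
  return /(a+)+b/.test(input); // the vulnerable pattern from the example above
}
```

This does not make the pattern safe; it only bounds the damage, which is why rewriting the pattern or switching to a linear-time engine remains the primary fix.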
|
||||||||||||
typescript:S5757 |
Log management is an important topic, especially for the security of a web application, to ensure that user activity, including that of potential attackers, is recorded and available for an analyst to understand what happened on the web application in case of malicious activity. Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS and others. However, to protect users' privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text.

Ask Yourself Whether

In a production environment:
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs.

Sensitive Code Example

With the Signale log management framework the code is sensitive when an empty list of secrets is defined:

    const { Signale } = require('signale');
    // here we suppose the credit card numbers are retrieved somewhere and CREDIT_CARD_NUMBERS
    // looks like ["1234-5678-0000-9999", "1234-5678-0000-8888"], for instance
    const CREDIT_CARD_NUMBERS = fetchFromWebForm();
    const options = { secrets: [] }; // empty list of secrets
    const logger = new Signale(options); // Sensitive
    CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
      logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
    });

Compliant Solution

With the Signale log management framework it is possible to define a list of secrets that will be hidden in logs:

    const { Signale } = require('signale');
    // here we suppose the credit card numbers are retrieved somewhere and CREDIT_CARD_NUMBERS
    // looks like ["1234-5678-0000-9999", "1234-5678-0000-8888"], for instance
    const CREDIT_CARD_NUMBERS = fetchFromWebForm();
    const options = { secrets: ["([0-9]{4}-?)+"] };
    const logger = new Signale(options); // Compliant
    CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
      logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
    });

See |
||||||||||||
typescript:S5759 |
Users often connect to web servers through HTTP proxies. A proxy can be configured to forward the client IP address to the target server via the X-Forwarded-For header (the xfwd option in the examples below). An IP address is personal information which can identify a single user and thus impact their privacy.

Ask Yourself Whether
There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

The user IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management.

Sensitive Code Example

    var httpProxy = require('http-proxy');
    httpProxy.createProxyServer({target: 'http://localhost:9000', xfwd: true}) // Noncompliant
      .listen(8000);

    var express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');
    const app = express();
    app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant
    app.listen(3000);

Compliant Solution

    var httpProxy = require('http-proxy');
    // By default the xfwd option is false
    httpProxy.createProxyServer({target: 'http://localhost:9000'}) // Compliant
      .listen(8000);

    var express = require('express');
    const { createProxyMiddleware } = require('http-proxy-middleware');
    const app = express();
    // By default the xfwd option is false
    app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true })); // Compliant
    app.listen(3000);

See
|
||||||||||||
typescript:S2255 |
This rule is deprecated, and will eventually be removed. Using cookies is security-sensitive. It has led in the past to the following vulnerabilities: Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed. This rule flags code that writes cookies.

Ask Yourself Whether

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a very few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session. Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie: the encoding can be reverted and the original information will be exposed. Using cookies only for session IDs doesn't make them secure. Follow OWASP best practices when you configure your cookies. As a side note, any information read from a cookie should be sanitized.

Sensitive Code Example

    // === Built-in NodeJS modules ===
    const http = require('http');
    const https = require('https');

    http.createServer(function(req, res) {
      res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
    });
    https.createServer(function(req, res) {
      res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
    });

    // === ExpressJS ===
    const express = require('express');
    const app = express();
    app.use(function(req, res, next) {
      res.cookie('name', 'John'); // Sensitive
    });

    // === In browser ===
    // Set cookie
    document.cookie = "name=John"; // Sensitive

See
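The OWASP configuration advice mentioned above can be illustrated with a small helper (a hypothetical sketch following common hardening practice, not code from this rule):

```javascript
// Hypothetical helper that builds a hardened Set-Cookie header value for a
// session identifier: HttpOnly blocks script access, Secure restricts the
// cookie to HTTPS, and SameSite limits cross-site sends.
function sessionCookie(name, value) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    'HttpOnly',
    'Secure',
    'SameSite=Strict',
    'Path=/'
  ].join('; ');
}
```

Used as `res.setHeader('Set-Cookie', sessionCookie('sid', id))`, only the opaque session ID reaches the client; all user-related data stays server-side, keyed by that ID.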
|
||||||||||||
typescript:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure, because it is too easy to create hash collisions with them.

Ask Yourself Whether

The hashed value is used in a security context like:

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended.

Sensitive Code Example

    const crypto = require("crypto");
    const hash = crypto.createHash('sha1'); // Sensitive

Compliant Solution

    const crypto = require("crypto");
    const hash = crypto.createHash('sha512'); // Compliant

See
|
||||||||||||
typescript:S5527 |
This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security. When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. To do so, an attacker would obtain a valid certificate authenticating a host under their control.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank's server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.
The hostname validation gets disabled by overriding the checkServerIdentity callback with an empty implementation.

Noncompliant code example

    const https = require('node:https');

    let options = {
      hostname: 'www.example.com',
      port: 443,
      path: '/',
      method: 'GET',
      checkServerIdentity: function() {}, // Noncompliant
      secureProtocol: 'TLSv1_2_method'
    };

    let req = https.request(options, (res) => {
      res.on('data', (d) => {
        process.stdout.write(d);
      });
    });

    const tls = require('node:tls');

    let options = {
      checkServerIdentity: function() {}, // Noncompliant
      secureProtocol: 'TLSv1_2_method'
    };

    let socket = tls.connect(443, "www.example.com", options, () => {
      process.stdin.pipe(socket);
      process.stdin.resume();
    });

Compliant solution

    const https = require('node:https');

    let options = {
      hostname: 'www.example.com',
      port: 443,
      path: '/',
      method: 'GET',
      secureProtocol: 'TLSv1_2_method'
    };

    let req = https.request(options, (res) => {
      res.on('data', (d) => {
        process.stdout.write(d);
      });
    });

    const tls = require('node:tls');

    let options = { secureProtocol: 'TLSv1_2_method' };

    let socket = tls.connect(443, "www.example.com", options, () => {
      process.stdin.pipe(socket);
      process.stdin.resume();
    });

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system's code should not work around another system's problems, as this creates unnecessary dependencies and can lead to reliability issues. Therefore, the first solution is to change the remote host's certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself. In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:
Resources

Standards
|
||||||||||||
typescript:S2755 |
This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as an XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system's memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.
How to fix it in libxmljs

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

    var libxmljs = require('libxmljs');
    var fs = require('fs');

    var xml = fs.readFileSync('xxe.xml', 'utf8');

    libxmljs.parseXmlString(xml, {
      noblanks: true,
      noent: true, // Noncompliant
      nocdata: true
    });

Compliant solution

    var libxmljs = require('libxmljs');
    var fs = require('fs');

    var xml = fs.readFileSync('xxe.xml', 'utf8');

    libxmljs.parseXmlString(xml);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework. If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.

Resources

Standards |
||||||||||||
typescript:S4817 |
This rule is deprecated, and will eventually be removed. Executing XPath expressions is security-sensitive. It has led in the past to the following vulnerabilities: User-provided data such as URL parameters should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the initial meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document.

Ask Yourself Whether

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize any user input before using it in an XPath expression.

Sensitive Code Example

    // === Server side ===
    var xpath = require('xpath');
    var xmldom = require('xmldom');

    var doc = new xmldom.DOMParser().parseFromString(xml);
    var nodes = xpath.select(userinput, doc); // Sensitive
    var node = xpath.select1(userinput, doc); // Sensitive

    // === Client side ===
    // Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
    var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

    // Internet Explorer uses its own methods to select nodes:
    var nodes = xmlDoc.selectNodes(userinput); // Sensitive
    var node = xmlDoc.SelectSingleNode(userinput); // Sensitive

See |
||||||||||||
typescript:S4818 |
This rule is deprecated, and will eventually be removed. Using sockets is security-sensitive. It has led in the past to the following vulnerabilities: Sockets are vulnerable in multiple ways:
This rule flags code that creates sockets. It matches only the direct use of sockets, not use through frameworks or high-level APIs such as the use of http connections.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

    const net = require('net');

    var socket = new net.Socket(); // Sensitive
    socket.connect(80, 'google.com');

    // net.createConnection creates a new net.Socket, initiates connection with socket.connect(),
    // then returns the net.Socket that starts the connection
    net.createConnection({ port: port }, () => {}); // Sensitive

    // net.connect is an alias to net.createConnection
    net.connect({ port: port }, () => {}); // Sensitive

See |
||||||||||||
typescript:S1523 |
Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities: Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: XSS attack) and have a huge impact on an application's security. This rule raises issues on calls to eval and Function. The rule also flags string literals starting with javascript:, since such URLs are executed as code.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and same-origin policy for JavaScript in a web browser). Do not try to create a blacklist of dangerous code: it is impossible to cover all attacks that way. Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.

Sensitive Code Example

    let value = eval('obj.' + propName); // Sensitive
    let func = Function('obj' + propName); // Sensitive
    location.href = 'javascript:void(0)'; // Sensitive

Exceptions

This rule will not raise an issue when the argument of the eval or Function call is a literal string, as this is reasonably safe.

See |
||||||||||||
typescript:S1525 |
This rule is deprecated; use S4507 instead.

Why is this an issue?

The debugger statement can be placed anywhere in procedures to suspend execution. Using the debugger statement is similar to setting a breakpoint in the code. By definition such statements must absolutely be removed from the source code to prevent any unexpected behavior or added vulnerability to attacks in production.

Noncompliant code example

    for (i = 1; i < 5; i++) {
      // Print i to the Output window.
      Debug.write("loop index is " + i);
      // Wait for user to resume.
      debugger;
    }

Compliant solution

    for (i = 1; i < 5; i++) {
      // Print i to the Output window.
      Debug.write("loop index is " + i);
    }

Resources |
||||||||||||
typescript:S2612 |
In Unix file system permissions, the "others" class refers to all users except the owner of the file and the members of the group assigned to the file. Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

Node.js:

    const fs = require('fs');
    fs.chmodSync("/tmp/fs", 0o777); // Sensitive

    const fs = require('fs');
    const fsPromises = fs.promises;
    fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive

    const fs = require('fs');
    const fsPromises = fs.promises;

    async function fileHandler() {
      let filehandle;
      try {
        filehandle = fsPromises.open('/tmp/fsPromises', 'r');
        filehandle.chmod(0o777); // Sensitive
      } finally {
        if (filehandle !== undefined)
          filehandle.close();
      }
    }

    const process = require('process');
    process.umask(0o000); // Sensitive

Compliant Solution

Node.js:

    const fs = require('fs');
    fs.chmodSync("/tmp/fs", 0o770); // Compliant

    const fs = require('fs');
    const fsPromises = fs.promises;
    fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant

    const fs = require('fs');
    const fsPromises = fs.promises;

    async function fileHandler() {
      let filehandle;
      try {
        filehandle = fsPromises.open('/tmp/fsPromises', 'r');
        filehandle.chmod(0o770); // Compliant
      } finally {
        if (filehandle !== undefined)
          filehandle.close();
      }
    }

    const process = require('process');
    process.umask(0o007); // Compliant

See
|
||||||||||||
typescript:S4721 |
Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process: shell meta-characters can be used (when parameters are user-controlled, for instance) to inject OS commands.

Ask Yourself Whether
There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don't spawn a shell.

Sensitive Code Example

    const cp = require('child_process');

    // A shell will be spawned in the following cases:
    cp.exec(cmd); // Sensitive
    cp.execSync(cmd); // Sensitive
    cp.spawn(cmd, { shell: true }); // Sensitive
    cp.spawnSync(cmd, { shell: true }); // Sensitive
    cp.execFile(cmd, { shell: true }); // Sensitive
    cp.execFileSync(cmd, { shell: true }); // Sensitive

Compliant Solution

    const cp = require('child_process');
    cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant

See |
||||||||||||
typescript:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which will increase an attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don't hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it makes it possible to change the destination quickly without having to rebuild the software.

Sensitive Code Example

    ip = "192.168.12.42"; // Sensitive
    const net = require('net');
    var client = new net.Socket();
    client.connect(80, ip, function() {
      // ...
    });

Compliant Solution

    ip = process.env.IP_ADDRESS; // Compliant
    const net = require('net');
    var client = new net.Socket();
    client.connect(80, ip, function() {
      // ...
    });

Exceptions

No issue is reported for the following cases because they are not considered sensitive:
See |
||||||||||||
typescript:S4829 |
This rule is deprecated, and will eventually be removed. Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities: It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated. This rule flags code that reads from the standard input. Ask Yourself Whether
You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

    // The process object is a global that provides information about, and control over, the current Node.js process
    // All uses of process.stdin are security-sensitive and should be reviewed

    process.stdin.on('readable', () => {
      const chunk = process.stdin.read(); // Sensitive
      if (chunk !== null) {
        dosomething(chunk);
      }
    });

    const readline = require('readline');
    readline.createInterface({
      input: process.stdin // Sensitive
    }).on('line', (input) => {
      dosomething(input);
    });

See |
||||||||||||
typescript:S4823 |
This rule is deprecated, and will eventually be removed. Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities: Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized. Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure. This rule raises an issue on every program entry point where command line arguments are used.

Ask Yourself Whether
If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them. Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line: it is common to write it to the process' standard input, or to give the path to a file containing the information.

Sensitive Code Example

    // The process object is a global that provides information about, and control over, the current Node.js process
    var param = process.argv[2]; // Sensitive: check how the argument is used
    console.log('Param: ' + param);

See |
||||||||||||
typescript:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank's server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Node.js

Code examples

The following code contains examples of disabled certificate validation.
The certificate validation gets disabled by setting rejectUnauthorized to false.

Noncompliant code example

    const https = require('node:https');

    let options = {
      hostname: 'www.example.com',
      port: 443,
      path: '/',
      method: 'GET',
      rejectUnauthorized: false, // Noncompliant
      secureProtocol: 'TLSv1_2_method'
    };

    let req = https.request(options, (res) => {
      res.on('data', (d) => {
        process.stdout.write(d);
      });
    });

    const tls = require('node:tls');

    let options = {
      rejectUnauthorized: false, // Noncompliant
      secureProtocol: 'TLSv1_2_method'
    };

    let socket = tls.connect(443, "www.example.com", options, () => {
      process.stdin.pipe(socket);
      process.stdin.resume();
    });

Compliant solution

    const https = require('node:https');

    let options = {
      hostname: 'www.example.com',
      port: 443,
      path: '/',
      method: 'GET',
      secureProtocol: 'TLSv1_2_method'
    };

    let req = https.request(options, (res) => {
      res.on('data', (d) => {
        process.stdout.write(d);
      });
    });

    const tls = require('node:tls');

    let options = { secureProtocol: 'TLSv1_2_method' };

    let socket = tls.connect(443, "www.example.com", options, () => {
      process.stdin.pipe(socket);
      process.stdin.resume();
    });

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots.
Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Standards
|
||||||||||||
typescript:S6268 |
Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM. Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

    import { Component, OnInit } from '@angular/core';
    import { DomSanitizer, SafeHtml } from "@angular/platform-browser";
    import { ActivatedRoute } from '@angular/router';

    @Component({
      template: '<div id="hello" [innerHTML]="hello"></div>'
    })
    export class HelloComponent implements OnInit {
      hello: SafeHtml;

      constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

      ngOnInit(): void {
        let name = this.route.snapshot.queryParams.name;
        let html = "<h1>Hello " + name + "</h1>";
        this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive
      }
    }

Compliant Solution

    import { Component, OnInit } from '@angular/core';
    import { DomSanitizer } from "@angular/platform-browser";
    import { ActivatedRoute } from '@angular/router';

    @Component({
      template: '<div id="hello"><h1>Hello {{name}}</h1></div>',
    })
    export class HelloComponent implements OnInit {
      name: string;

      constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

      ngOnInit(): void {
        this.name = this.route.snapshot.queryParams.name;
      }
    }

See |
||||||||||||
typescript:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes). Ask Yourself WhetherArchives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleFor tar module: const tar = require('tar'); tar.x({ // Sensitive file: 'foo.tar.gz' }); For adm-zip module: const AdmZip = require('adm-zip'); let zip = new AdmZip("./foo.zip"); zip.extractAllTo("."); // Sensitive For jszip module: const fs = require("fs"); const JSZip = require("jszip"); fs.readFile("foo.zip", function(err, data) { if (err) throw err; JSZip.loadAsync(data).then(function (zip) { // Sensitive zip.forEach(function (relativePath, zipEntry) { if (!zip.file(zipEntry.name)) { fs.mkdirSync(zipEntry.name); } else { zip.file(zipEntry.name).async('nodebuffer').then(function (content) { fs.writeFileSync(zipEntry.name, content); }); } }); }); }); For yauzl module const yauzl = require('yauzl'); yauzl.open('foo.zip', function (err, zipfile) { if (err) throw err; zipfile.on("entry", function(entry) { zipfile.openReadStream(entry, function(err, readStream) { if (err) throw err; // TODO: extract }); }); }); For extract-zip module: const extract = require('extract-zip') async function main() { let target = __dirname + '/test'; await extract('test.zip', { dir: target }); // Sensitive } main(); Compliant SolutionFor tar module: const tar = require('tar'); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB let fileCount = 0; let totalSize = 0; tar.x({ file: 'foo.tar.gz', filter: (path, entry) => { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } totalSize += entry.size; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } return true; } }); For adm-zip module: const AdmZip = require('adm-zip'); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB const THRESHOLD_RATIO = 10; let fileCount = 0; let totalSize = 0; let zip = new AdmZip("./foo.zip"); let zipEntries = zip.getEntries(); zipEntries.forEach(function(zipEntry) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. 
number of files'; } let entrySize = zipEntry.getData().length; totalSize += entrySize; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } let compressionRatio = entrySize / zipEntry.header.compressedSize; if (compressionRatio > THRESHOLD_RATIO) { throw 'Reached max. compression ratio'; } if (!zipEntry.isDirectory) { zip.extractEntryTo(zipEntry.entryName, "."); } }); For jszip module: const fs = require("fs"); const pathmodule = require("path"); const JSZip = require("jszip"); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB let fileCount = 0; let totalSize = 0; let targetDirectory = __dirname + '/archive_tmp'; fs.readFile("foo.zip", function(err, data) { if (err) throw err; JSZip.loadAsync(data).then(function (zip) { zip.forEach(function (relativePath, zipEntry) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } // Prevent ZipSlip path traversal (S6096) const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name); if (!resolvedPath.startsWith(targetDirectory)) { throw 'Path traversal detected'; } if (!zip.file(zipEntry.name)) { fs.mkdirSync(resolvedPath); } else { zip.file(zipEntry.name).async('nodebuffer').then(function (content) { totalSize += content.length; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } fs.writeFileSync(resolvedPath, content); }); } }); }); }); Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure. For yauzl module const yauzl = require('yauzl'); const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB const THRESHOLD_RATIO = 10; yauzl.open('foo.zip', function (err, zipfile) { if (err) throw err; let fileCount = 0; let totalSize = 0; zipfile.on("entry", function(entry) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } // The uncompressedSize comes from the zip headers, so it might not be trustworthy. 
// Alternatively, calculate the size from the readStream. let entrySize = entry.uncompressedSize; totalSize += entrySize; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } if (entry.compressedSize > 0) { let compressionRatio = entrySize / entry.compressedSize; if (compressionRatio > THRESHOLD_RATIO) { throw 'Reached max. compression ratio'; } } zipfile.openReadStream(entry, function(err, readStream) { if (err) throw err; // TODO: extract }); }); }); Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure. For extract-zip module: const extract = require('extract-zip') const MAX_FILES = 10000; const MAX_SIZE = 1000000000; // 1 GB const THRESHOLD_RATIO = 10; async function main() { let fileCount = 0; let totalSize = 0; let target = __dirname + '/foo'; await extract('foo.zip', { dir: target, onEntry: function(entry, zipfile) { fileCount++; if (fileCount > MAX_FILES) { throw 'Reached max. number of files'; } // The uncompressedSize comes from the zip headers, so it might not be trustworthy. // Alternatively, calculate the size from the readStream. let entrySize = entry.uncompressedSize; totalSize += entrySize; if (totalSize > MAX_SIZE) { throw 'Reached max. size'; } if (entry.compressedSize > 0) { let compressionRatio = entrySize / entry.compressedSize; if (compressionRatio > THRESHOLD_RATIO) { throw 'Reached max. compression ratio'; } } } }); } main(); See
|
||||||||||||
typescript:S6245 |
This rule is deprecated, and will eventually be removed. Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks, and other attacks on the AWS infrastructure itself. There are three SSE options:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys. Sensitive Code ExampleServer-side encryption is not used: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'default' }); // Sensitive Bucket encryption is disabled by default. Compliant SolutionServer-side encryption with AWS KMS-managed keys is used: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { encryption: s3.BucketEncryption.KMS_MANAGED }); // Alternatively, with a KMS key managed by the user. new s3.Bucket(this, 'id', { encryption: s3.BucketEncryption.KMS, encryptionKey: access_key }); See
|
||||||||||||
typescript:S6249 |
By default, S3 buckets can be accessed through the HTTP and HTTPS protocols. As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. This means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to enforce HTTPS-only access by setting the enforceSSL property to true. Sensitive Code ExampleS3 bucket objects access through TLS is not enforced by default: const s3 = require('aws-cdk-lib/aws-s3'); const bucket = new s3.Bucket(this, 'example'); // Sensitive Compliant Solutionconst s3 = require('aws-cdk-lib/aws-s3'); const bucket = new s3.Bucket(this, 'example', { bucketName: 'example', versioned: true, publicReadAccess: false, enforceSSL: true }); See
|
||||||||||||
typescript:S6265 |
Predefined permissions, also known as canned ACLs, are an easy way to grant broad privileges to predefined groups or users. The following canned ACLs are security-sensitive:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to implement the least privilege policy, i.e., to only grant users the necessary permissions for their required tasks. In the context of canned ACLs, set the access control to private (the default) and use a fine-grained S3 policy if more permissions are needed. Sensitive Code ExampleAll users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'bucket', { accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive }); new s3deploy.BucketDeployment(this, 'DeployWebsite', { accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive }); Compliant SolutionWith the PRIVATE access control, only the bucket owner has access: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'bucket', { accessControl: s3.BucketAccessControl.PRIVATE }); new s3deploy.BucketDeployment(this, 'DeployWebsite', { accessControl: s3.BucketAccessControl.PRIVATE }); See
|
||||||||||||
typescript:S6270 |
Resource-based policies granting access to all users can lead to information leakage. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need the access and grant to them only the required privileges. Sensitive Code ExampleThis policy allows all users, including anonymous ones, to access an S3 bucket: import { aws_iam as iam } from 'aws-cdk-lib' import { aws_s3 as s3 } from 'aws-cdk-lib' const bucket = new s3.Bucket(this, "ExampleBucket") bucket.addToResourcePolicy(new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["s3:*"], resources: [bucket.arnForObjects("*")], principals: [new iam.AnyPrincipal()] // Sensitive })) Compliant SolutionThis policy allows only the authorized users: import { aws_iam as iam } from 'aws-cdk-lib' import { aws_s3 as s3 } from 'aws-cdk-lib' const bucket = new s3.Bucket(this, "ExampleBucket") bucket.addToResourcePolicy(new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["s3:*"], resources: [bucket.arnForObjects("*")], principals: [new iam.AccountRootPrincipal()] })) See
|
||||||||||||
typescript:S6275 |
Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade. Sensitive Code Exampleimport { Size } from 'aws-cdk-lib'; import { Volume } from 'aws-cdk-lib/aws-ec2'; new Volume(this, 'unencrypted-explicit', { availabilityZone: 'us-west-2a', size: Size.gibibytes(1), encrypted: false // Sensitive }); import { Size } from 'aws-cdk-lib'; import { Volume } from 'aws-cdk-lib/aws-ec2'; new Volume(this, 'unencrypted-implicit', { availabilityZone: 'eu-west-1a', size: Size.gibibytes(1), }); // Sensitive as encryption is disabled by default Compliant Solutionimport { Size } from 'aws-cdk-lib'; import { Volume } from 'aws-cdk-lib/aws-ec2'; new Volume(this, 'encrypted-explicit', { availabilityZone: 'eu-west-1a', size: Size.gibibytes(1), encrypted: true }); See |
||||||||||||
typescript:S2817 |
This rule is deprecated, and will eventually be removed. Why is this an issue?The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.) Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database. Noncompliant code examplevar db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024); // Noncompliant Resources |
||||||||||||
typescript:S2819 |
Cross-origin communication allows different websites to interact with each other. This interaction is typically achieved through mechanisms like AJAX requests, WebSockets, or the postMessage API. However, a vulnerability can arise when these communications are not properly secured by verifying their origins. Why is this an issue?Without origin verification, the target website cannot distinguish between legitimate requests from its own pages and malicious requests from an attacker’s site. The attacker can craft a malicious website or script that sends requests to a target website where the user is already authenticated. This vulnerability class is not about a single specific user input or action, but rather a series of actions that lead to an insecure cross-origin communication. What is the potential impact?The absence of origin verification during cross-origin communications can lead to serious security issues. Data BreachIf an attacker can successfully exploit this vulnerability, they may gain unauthorized access to sensitive data. For instance, a user’s personal information, financial details, or other confidential data could be exposed. This not only compromises the user’s privacy but can also lead to identity theft or financial loss. Unauthorized ActionsAn attacker could manipulate the communication between websites to perform actions on behalf of the user without their knowledge. This could range from making unauthorized purchases to changing user settings or even deleting accounts. How to fix itWhen sending a message, avoid using the wildcard * as the target origin and specify the exact origin of the intended recipient instead. When receiving a message, always verify the origin property of the incoming event before processing its data. Code examplesNoncompliant code exampleWhen sending a message: var iframe = document.getElementById("testiframe"); iframe.contentWindow.postMessage("hello", "*"); // Noncompliant: * is used When receiving a message: window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property.
console.log(event.data); }); Compliant solutionWhen sending a message: var iframe = document.getElementById("testiframe"); iframe.contentWindow.postMessage("hello", "https://secure.example.com"); When receiving a message: window.addEventListener("message", function(event) { if (event.origin !== "http://example.org") return; console.log(event.data) }); ResourcesDocumentation
Standards |
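Building on the compliant receiver above, keeping the trusted origins in a single allowlist makes the check harder to get wrong when several senders are legitimate (the origins listed here are hypothetical):

```javascript
// Hypothetical set of origins allowed to message this page.
const TRUSTED_ORIGINS = new Set([
  'https://secure.example.com',
  'https://partner.example.org',
]);

function isTrustedOrigin(origin) {
  // Exact match only: prefix or substring checks are bypassable
  // (e.g. "https://secure.example.com.evil.net").
  return TRUSTED_ORIGINS.has(origin);
}

// Browser-only wiring; guarded so the helper stays usable elsewhere.
if (typeof window !== 'undefined') {
  window.addEventListener('message', (event) => {
    if (!isTrustedOrigin(event.origin)) return; // drop untrusted messages
    console.log(event.data);
  });
}
```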
||||||||||||
typescript:S6252 |
S3 buckets can be versioned. When a bucket is unversioned, a new version of an object overwrites the existing one, which can lead to unintentional or intentional information loss. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object. Sensitive Code Exampleconst s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', versioned: false // Sensitive }); The default value of versioned is false, so omitting the property is equally sensitive. Compliant Solutionconst s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', versioned: true }); See
|
||||||||||||
typescript:S6281 |
By default, S3 buckets are private: only the bucket owner can access them. This access control can be relaxed with ACLs or policies. To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:
The BlockPublicAccess.BLOCK_ACLS preset only turns on blockPublicAcls and ignorePublicAcls, so public policies can still affect the S3 bucket. However, all of those options can be enabled by setting the blockPublicAccess property of the S3 bucket to BlockPublicAccess.BLOCK_ALL. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to configure:
Sensitive Code ExampleBy default, when not set, the blockPublicAccess property is fully deactivated and nothing is blocked: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket' }); // Sensitive This BlockPublicAccess configuration is sensitive because blockPublicAcls is disabled: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: new s3.BlockPublicAccess({ blockPublicAcls : false, // Sensitive blockPublicPolicy : true, ignorePublicAcls : true, restrictPublicBuckets : true }) }); The BLOCK_ACLS preset is also sensitive because it does not block public policies: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive }); Compliant SolutionThis blockPublicAccess blocks all public ACLs and policies: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL }); A similar configuration to the one above can be obtained by setting all parameters of the BlockPublicAccess constructor to true: const s3 = require('aws-cdk-lib/aws-s3'); new s3.Bucket(this, 'id', { bucketName: 'bucket', blockPublicAccess: new s3.BlockPublicAccess({ blockPublicAcls : true, blockPublicPolicy : true, ignorePublicAcls : true, restrictPublicBuckets : true }) }); See
|
||||||||||||
typescript:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplevar mysql = require('mysql'); var connection = mysql.createConnection( { host:'localhost', user: "admin", database: "project", password: "mypassword", // sensitive multipleStatements: true }); connection.connect(); Compliant Solutionvar mysql = require('mysql'); var connection = mysql.createConnection({ host: process.env.MYSQL_URL, user: process.env.MYSQL_USERNAME, password: process.env.MYSQL_PASSWORD, database: process.env.MYSQL_DATABASE }); connection.connect(); See
|
||||||||||||
typescript:S5332 |
Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the http protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code Exampleurl = "http://example.com"; // Sensitive url = "ftp://anonymous@example.com"; // Sensitive url = "telnet://anonymous@example.com"; // Sensitive For nodemailer: const nodemailer = require("nodemailer"); let transporter = nodemailer.createTransport({ secure: false, // Sensitive requireTLS: false // Sensitive }); const nodemailer = require("nodemailer"); let transporter = nodemailer.createTransport({}); // Sensitive For ftp: var Client = require('ftp'); var c = new Client(); c.connect({ 'secure': false // Sensitive }); For telnet-client: const Telnet = require('telnet-client'); // Sensitive For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer: import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const alb = new ApplicationLoadBalancer(this, 'ALB', { vpc: vpc, internetFacing: true }); alb.addListener('listener-http-default', { port: 8080, open: true }); // Sensitive alb.addListener('listener-http-explicit', { protocol: ApplicationProtocol.HTTP, // Sensitive port: 8080, open: true }); For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener: import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new ApplicationListener(this, 'listener-http-explicit-constructor', { loadBalancer: alb, protocol: ApplicationProtocol.HTTP, // Sensitive port: 8080, open: true }); For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer: import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const nlb = new NetworkLoadBalancer(this, 'nlb', { vpc: vpc, internetFacing: true }); var listenerNLB = nlb.addListener('listener-tcp-default', { port: 1234 }); // Sensitive listenerNLB = nlb.addListener('listener-tcp-explicit', { protocol: Protocol.TCP, // Sensitive port: 1234 }); For 
aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener: import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new NetworkListener(this, 'listener-tcp-explicit-constructor', { loadBalancer: nlb, protocol: Protocol.TCP, // Sensitive port: 8080 }); For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener: import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new CfnListener(this, 'listener-http', { defaultActions: defaultActions, loadBalancerArn: alb.loadBalancerArn, protocol: "HTTP", // Sensitive port: 80 }); new CfnListener(this, 'listener-tcp', { defaultActions: defaultActions, loadBalancerArn: alb.loadBalancerArn, protocol: "TCP", // Sensitive port: 80 }); For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer: import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing'; new CfnLoadBalancer(this, 'elb-tcp', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'tcp' // Sensitive }] }); new CfnLoadBalancer(this, 'elb-http', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'http' // Sensitive }] }); For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer: import { LoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing'; const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', { vpc, internetFacing: true, healthCheck: { port: 80, }, listeners: [ { externalPort:10000, externalProtocol: LoadBalancingProtocol.TCP, // Sensitive internalPort:10000 }] }); loadBalancer.addListener({ externalPort:10001, externalProtocol:LoadBalancingProtocol.TCP, // Sensitive internalPort:10001 }); loadBalancer.addListener({ externalPort:10002, externalProtocol:LoadBalancingProtocol.HTTP, // Sensitive internalPort:10002 }); For aws-cdk-lib.aws-elasticache.CfnReplicationGroup: import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache'; new CfnReplicationGroup(this, 'unencrypted-implicit', { replicationGroupDescription: 'exampleDescription' }); // Sensitive new CfnReplicationGroup(this, 
'unencrypted-explicit', { replicationGroupDescription: 'exampleDescription', transitEncryptionEnabled: false // Sensitive }); For aws-cdk-lib.aws-kinesis.CfnStream: import { CfnStream } from 'aws-cdk-lib/aws-kinesis'; new CfnStream(this, 'cfnstream-implicit-unencrytped', undefined); // Sensitive new CfnStream(this, 'cfnstream-explicit-unencrytped', { streamEncryption: undefined // Sensitive }); For aws-cdk-lib.aws-kinesis.Stream: import { Stream } from 'aws-cdk-lib/aws-kinesis'; new Stream(this, 'stream-explicit-unencrypted', { encryption: StreamEncryption.UNENCRYPTED // Sensitive }); Compliant Solutionurl = "https://example.com"; url = "sftp://anonymous@example.com"; url = "ssh://anonymous@example.com"; For nodemailer one of the following options must be set: const nodemailer = require("nodemailer"); let transporter = nodemailer.createTransport({ secure: true, requireTLS: true, port: 465, secured: true }); For ftp: var Client = require('ftp'); var c = new Client(); c.connect({ 'secure': true }); For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer: import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const alb = new ApplicationLoadBalancer(this, 'ALB', { vpc: vpc, internetFacing: true }); alb.addListener('listener-https-explicit', { protocol: ApplicationProtocol.HTTPS, port: 8080, open: true, certificates: [certificate] }); alb.addListener('listener-https-implicit', { port: 8080, open: true, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener: import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new ApplicationListener(this, 'listener-https-explicit', { loadBalancer: loadBalancer, protocol: ApplicationProtocol.HTTPS, port: 8080, open: true, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer: import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; const nlb = new NetworkLoadBalancer(this, 
'nlb', { vpc: vpc, internetFacing: true }); nlb.addListener('listener-tls-explicit', { protocol: Protocol.TLS, port: 1234, certificates: [certificate] }); nlb.addListener('listener-tls-implicit', { port: 1234, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener: import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new NetworkListener(this, 'listener-tls-explicit', { loadBalancer: loadBalancer, protocol: Protocol.TLS, port: 8080, certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener: import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2'; new CfnListener(this, 'listener-https', { defaultActions: defaultActions, loadBalancerArn: loadBalancerArn, protocol: "HTTPS", port: 80 certificates: [certificate] }); new CfnListener(this, 'listener-tls', { defaultActions: defaultActions, loadBalancerArn: loadBalancerArn, protocol: "TLS", port: 80 certificates: [certificate] }); For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer: import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing'; new CfnLoadBalancer(this, 'elb-ssl', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'ssl', sslCertificateId: sslCertificateId }] }); new CfnLoadBalancer(this, 'elb-https', { listeners: [{ instancePort: '1000', loadBalancerPort: '1000', protocol: 'https', sslCertificateId: sslCertificateId }] }); For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer: import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing'; const lb = new LoadBalancer(this, 'elb-ssl', { vpc, internetFacing: true, healthCheck: { port: 80, }, listeners: [ { externalPort:10000, externalProtocol:LoadBalancingProtocol.SSL, internalPort:10000 }] }); lb.addListener({ externalPort:10001, externalProtocol:LoadBalancingProtocol.SSL, internalPort:10001 }); lb.addListener({ externalPort:10002, externalProtocol:LoadBalancingProtocol.HTTPS, internalPort:10002 
}); For aws-cdk-lib.aws-elasticache.CfnReplicationGroup: import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache'; new CfnReplicationGroup(this, 'encrypted-explicit', { replicationGroupDescription: 'example', transitEncryptionEnabled: true }); For aws-cdk-lib.aws-kinesis.Stream: import { Stream } from 'aws-cdk-lib/aws-kinesis'; new Stream(this, 'stream-implicit-encrypted'); new Stream(this, 'stream-explicit-encrypted-selfmanaged', { encryption: StreamEncryption.KMS, encryptionKey: encryptionKey, }); new Stream(this, 'stream-explicit-encrypted-managed', { encryption: StreamEncryption.MANAGED }); For aws-cdk-lib.aws-kinesis.CfnStream: import { CfnStream } from 'aws-cdk-lib/aws-kinesis'; new CfnStream(this, 'cfnstream-explicit-encrypted', { streamEncryption: { encryptionType: encryptionType, keyId: encryptionKey.keyId, } }); ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
typescript:S6299 |
The Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML contents, using native browser APIs like textContent.
It’s still possible to explicitly render HTML content as-is, for instance with the v-html directive, bypassing this sanitization. Ask Yourself WhetherThe application needs to render HTML content which:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleWhen using Vue.js templates, the v-html directive renders the value as raw HTML: <div v-html="htmlContent"></div> <!-- Noncompliant --> When using a rendering function, the innerHTML DOM property is equally dangerous: Vue.component('element', { render: function (createElement) { return createElement( 'div', { domProps: { innerHTML: this.htmlContent, // Noncompliant } } ); }, }); When using JSX, the domPropsInnerHTML attribute behaves the same way: <div domPropsInnerHTML={this.htmlContent}></div> <!-- Noncompliant --> Compliant SolutionWhen using Vue.js templates, putting the content as a child node of the element is safe: <div>{{ htmlContent }}</div> When using a rendering function, using the innerText DOM property is safe: Vue.component('element', { render: function (createElement) { return createElement( 'div', { domProps: { innerText: this.htmlContent, } }, this.htmlContent // Child node ); }, }); When using JSX, putting the content as a child node of the element is safe: <div>{this.htmlContent}</div> See
||||||||||||
typescript:S6304 |
A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information will occur. Ask Yourself WhetherThe AWS account has more than one resource with different levels of sensitivity. A risk exists if you answered yes to this question. Recommended Secure Coding PracticesIt’s recommended to apply the least privilege principle, i.e., only granting access to necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of data they store or process. Therefore, managing a secure access control is less prone to errors. Sensitive Code ExampleThe wildcard "*" is used as the resource, so the permission applies to every resource in the account: import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [ new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["iam:CreatePolicyVersion"], resources: ["*"] // Sensitive }) ] }) Compliant SolutionRestrict the update permission to the appropriate subset of policies: import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [ new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["iam:CreatePolicyVersion"], resources: ["arn:aws:iam:::policy/team1/*"] }) ] }) Exceptions
See
|
||||||||||||
typescript:S2077 |
Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into the query increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example// === MySQL === const mysql = require('mysql'); const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db }); mycon.connect(function(err) { mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive }); // === PostgreSQL === const pg = require('pg'); const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db }); pgcon.connect(); pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive Compliant Solution// === MySQL === const mysql = require('mysql'); const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db }); mycon.connect(function(err) { mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {}); }); // === PostgreSQL === const pg = require('pg'); const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db }); pgcon.connect(); pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {}); ExceptionsThis rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call. const sql = 'SELECT * FROM users WHERE id = ' + userinput; mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised. See
|
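The mysql `?` and PostgreSQL `$1` placeholders above both rely on the same principle: the SQL text stays constant and the user input travels separately as data. A minimal sketch with hypothetical helper functions (not part of any driver) makes the difference visible without a database connection:

```javascript
// Hypothetical helpers contrasting string concatenation with a
// placeholder-based query descriptor like the drivers above accept.
function buildUserQueryUnsafe(userinput) {
  // The input becomes part of the SQL text itself.
  return "SELECT name FROM users WHERE id = " + userinput;
}

function buildUserQuerySafe(userinput) {
  // The SQL text never changes; the input is shipped separately as a
  // bound value, as in the mysql '?' / PostgreSQL '$1' examples.
  return { text: "SELECT name FROM users WHERE id = ?", values: [userinput] };
}

const malicious = "1 OR 1=1";
console.log(buildUserQueryUnsafe(malicious)); // injected text is now part of the query
console.log(buildUserQuerySafe(malicious).text); // query text unchanged
```

With the unsafe form, the `OR 1=1` fragment is interpreted as SQL; with the safe form it can only ever be compared against the `id` column as a value.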
||||||||||||
typescript:S5691 |
Hidden files are created automatically by many tools to save user preferences; well-known examples are .profile, .bashrc and .bash_history. Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleExpress.js serve-static middleware: let serveStatic = require("serve-static"); let app = express(); let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'}); // Sensitive app.use(serveStaticMiddleware); Compliant SolutionExpress.js serve-static middleware: let serveStatic = require("serve-static"); let app = express(); let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'}); // Compliant: ignore or deny are recommended values let serveStaticDefault = serveStatic('public', { 'index': false}); // Compliant: by default, "dotfiles" (file or directory that begins with a dot) are not served (with the exception that files within a directory that begins with a dot are not ignored), see serve-static module documentation app.use(serveStaticMiddleware); See
|
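Conceptually, the check serve-static performs when `dotfiles` is set to `'ignore'` or `'deny'` is simple: refuse any request whose path contains a segment beginning with a dot. A minimal sketch of that check (a hypothetical helper, not the actual serve-static implementation):

```javascript
// Returns true when any path segment is a dotfile or dot-directory,
// e.g. "/.git/config" or "/public/.env". "." and ".." are ordinary
// path navigation, not hidden entries, so they are excluded.
function containsDotfile(requestPath) {
  return requestPath.split('/').some(
    (segment) => segment.startsWith('.') && segment !== '.' && segment !== '..'
  );
}

console.log(containsDotfile('/public/index.html')); // false
console.log(containsDotfile('/public/.env'));       // true
console.log(containsDotfile('/.git/config'));       // true
```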
||||||||||||
typescript:S5693 |
Rejecting requests with significant content length is a good practice to control the network traffic intensity and thus resource consumption in order to prevent DoS attacks. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to customize the rule with the limit values that correspond to the web application. Sensitive Code Exampleformidable file upload module: const form = new Formidable(); form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB const formDefault = new Formidable(); // Sensitive, the default value is 200MB multer (Express.js middleware) file upload module: let diskUpload = multer({ storage: diskStorage, limits: { fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB } }); let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit storage: diskStorage, }); body-parser module: // 4MB is more than the recommended limit of 2MB for non-file-upload requests let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive Compliant Solutionformidable file upload module: const form = new Formidable(); form.maxFileSize = 8000000; // Compliant: 8MB multer (Express.js middleware) file upload module: let diskUpload = multer({ storage: diskStorage, limits: { fileSize: 8000000 // Compliant: 8MB } }); body-parser module: let jsonParser = bodyParser.json(); // Compliant, when the limit is not defined, the default value is set to 100kb let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant See
|
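The guard these modules apply boils down to comparing the declared size of the incoming body against a configured limit before (and while) reading it. A sketch of that first check, with a hypothetical helper (real middleware also counts the bytes it actually reads, since Content-Length can be absent or wrong):

```javascript
// 2MB: the recommended limit for non-file-upload requests mentioned above.
const MAX_BODY_BYTES = 2 * 1024 * 1024;

// Rejects a request early when its declared Content-Length exceeds the limit.
function exceedsLimit(headers, limit = MAX_BODY_BYTES) {
  const declared = Number.parseInt(headers['content-length'] ?? '0', 10);
  return Number.isFinite(declared) && declared > limit;
}

console.log(exceedsLimit({ 'content-length': '1024' }));    // false
console.log(exceedsLimit({ 'content-length': '4194304' })); // true (4MB)
```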
||||||||||||
typescript:S6302 |
A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information. Ask Yourself WhetherIdentities obtaining all the permissions:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used. Sensitive Code ExampleA customer-managed policy that grants all permissions by using the wildcard (*) in the actions property: import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["*"], // Sensitive resources: ["arn:aws:iam:::user/*"], }) Compliant SolutionA customer-managed policy that grants only the required permissions: import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["iam:GetAccountSummary"], resources: ["arn:aws:iam:::user/*"], }) See
|
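The rule's core check can be described as: an Allow statement whose action list contains the wildcard grants every permission. A conceptual sketch of that check as plain data inspection (a hypothetical helper mirroring the rule's intent, not AWS tooling or the analyzer itself):

```javascript
// Flags a policy statement that allows the "*" action, i.e. one that
// grants all permissions on its resources.
function grantsAllActions(statement) {
  const actions = Array.isArray(statement.actions)
    ? statement.actions
    : [statement.actions];
  return statement.effect === 'Allow' && actions.includes('*');
}

console.log(grantsAllActions({ effect: 'Allow', actions: ['*'] }));                     // true
console.log(grantsAllActions({ effect: 'Allow', actions: ['iam:GetAccountSummary'] })); // false
```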
||||||||||||
typescript:S6303 |
Using unencrypted RDS DB resources exposes data to unauthorized access. This situation can occur in a variety of scenarios, such as:
After a successful intrusion, the underlying applications are exposed to:
AWS-managed encryption at rest reduces this risk with a simple switch. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine. Sensitive Code ExampleFor aws-cdk-lib.aws_rds.CfnDBCluster: import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBCluster(this, 'example', { storageEncrypted: false, // Sensitive }); For aws-cdk-lib.aws_rds.CfnDBInstance: import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBInstance(this, 'example', { storageEncrypted: false, // Sensitive }); For aws-cdk-lib.aws_rds.DatabaseCluster: import { aws_rds as rds } from 'aws-cdk-lib'; import { aws_ec2 as ec2 } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; const cluster = new rds.DatabaseCluster(this, 'example', { engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }), instanceProps: { vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, }, vpc, }, storageEncrypted: false, // Sensitive }); For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot: import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseClusterFromSnapshot(this, 'example', { engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }), instanceProps: { vpc, }, snapshotIdentifier: 'exampleSnapshot', storageEncrypted: false, // Sensitive }); For aws-cdk-lib.aws_rds.DatabaseInstance: import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseInstance(this, 'example', { engine: rds.DatabaseInstanceEngine.POSTGRES, vpc, storageEncrypted: false, // Sensitive }); For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica: import { aws_rds as rds } from 'aws-cdk-lib'; declare const sourceInstance: rds.DatabaseInstance; new rds.DatabaseInstanceReadReplica(this, 'example', { sourceDatabaseInstance: sourceInstance, instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE), vpc, storageEncrypted: false, // Sensitive }); Compliant SolutionFor aws-cdk-lib.aws_rds.CfnDBCluster: import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBCluster(this, 'example', { storageEncrypted: true, }); For aws-cdk-lib.aws_rds.CfnDBInstance: import { aws_rds as rds } from 'aws-cdk-lib'; new rds.CfnDBInstance(this, 'example',
{ storageEncrypted: true, }); For aws-cdk-lib.aws_rds.DatabaseCluster: import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; const cluster = new rds.DatabaseCluster(this, 'example', { engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }), instanceProps: { vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, }, vpc, }, storageEncrypted: true, }); For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot: import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseClusterFromSnapshot(this, 'example', { engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }), instanceProps: { vpc, }, snapshotIdentifier: 'exampleSnapshot', storageEncrypted: true, }); For aws-cdk-lib.aws_rds.DatabaseInstance: import { aws_rds as rds } from 'aws-cdk-lib'; declare const vpc: ec2.Vpc; new rds.DatabaseInstance(this, 'example', { engine: rds.DatabaseInstanceEngine.POSTGRES, vpc, storageEncrypted: true, }); For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica: import { aws_rds as rds } from 'aws-cdk-lib'; declare const sourceInstance: rds.DatabaseInstance; new rds.DatabaseInstanceReadReplica(this, 'example', { sourceDatabaseInstance: sourceInstance, instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE), vpc, storageEncrypted: true, }); See
|
||||||||||||
typescript:S6308 |
Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated. To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:
Thus, adversaries cannot access the data if they gain physical access to the storage medium. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt is recommended to encrypt OpenSearch domains that contain sensitive information. OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary. Sensitive Code ExampleFor aws-cdk-lib.aws_opensearchservice.Domain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', { version: EngineVersion.OPENSEARCH_1_3, }); // Sensitive, encryption must be explicitly enabled For aws-cdk-lib.aws_opensearchservice.CfnDomain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', { engineVersion: 'OpenSearch_1.3', }); // Sensitive, encryption must be explicitly enabled Compliant SolutionFor aws-cdk-lib.aws_opensearchservice.Domain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', { version: EngineVersion.OPENSEARCH_1_3, encryptionAtRest: { enabled: true, }, }); For aws-cdk-lib.aws_opensearchservice.CfnDomain: import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib'; const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', { engineVersion: 'OpenSearch_1.3', encryptionAtRestOptions: { enabled: true, }, }); See
|
||||||||||||
typescript:S6317 |
Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access. Why is this an issue?AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They give an identity the ability to perform a predefined set of actions on a list of resources. For such policies, it is easy to define very broad permissions (by using wildcard "*" permissions, for example). If this is not done deliberately, it can potentially carry security risks in the case that an attacker gets access to one of these identities. What is the potential impact?AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope. Privilege escalationWhen IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities. For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets.
If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account. How to fix it in AWS CDKCode examplesIn this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges. Noncompliant code exampleimport { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["lambda:UpdateFunctionCode"], resources: ["*"], // Noncompliant })], }); Compliant solutionThe policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed. import { aws_iam as iam } from 'aws-cdk-lib' new iam.PolicyDocument({ statements: [new iam.PolicyStatement({ effect: iam.Effect.ALLOW, actions: ["lambda:UpdateFunctionCode"], resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"], })], }); How does this work?Principle of least privilegeWhen creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else. To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used. ResourcesDocumentation
Articles & blog posts
Standards |
||||||||||||
typescript:S6319 |
Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. In the event that adversaries physically access the storage media, they cannot decrypt encrypted data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary. Sensitive Code ExampleFor aws-cdk-lib.aws_sagemaker.CfnNotebookInstance: import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker'; new CfnNotebookInstance(this, 'example', { instanceType: 'instanceType', roleArn: 'roleArn' }); // Sensitive Compliant SolutionFor aws-cdk-lib.aws_sagemaker.CfnNotebookInstance: import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker'; import { Key } from 'aws-cdk-lib/aws-kms'; const encryptionKey = new Key(this, 'example', { enableKeyRotation: true, }); new CfnNotebookInstance(this, 'example', { instanceType: 'instanceType', roleArn: 'roleArn', kmsKeyId: encryptionKey.keyId }); See |
||||||||||||
typescript:S5443 |
Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp on Linux-based systems.
In the past, misuse of these directories has led to multiple vulnerabilities. This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp, or whenever it detects the usage of an environment variable pointing to such a directory, e.g., TMP or TMPDIR.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleconst fs = require('fs'); let tmp_file = "/tmp/temporary_file"; // Sensitive fs.readFile(tmp_file, 'utf8', function (err, data) { // ... }); const fs = require('fs'); let tmp_dir = process.env.TMPDIR; // Sensitive fs.readFile(tmp_dir + "/temporary_file", 'utf8', function (err, data) { // ... }); Compliant Solutionconst tmp = require('tmp'); const tmpobj = tmp.fileSync(); // Compliant See
|
||||||||||||
typescript:S5689 |
Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment. Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version. Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesIn general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle. The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header. Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that
this does not provide as much protection as regular updates and patches. Sensitive Code ExampleIn Express.js, version information is disclosed by default in the x-powered-by HTTP header: let express = require('express'); let example = express(); // Sensitive example.get('/', function (req, res) { res.send('example') }); Compliant Solution
let express = require('express'); let example = express(); example.disable("x-powered-by"); Or with helmet’s hidePoweredBy middleware: let helmet = require("helmet"); let example = express(); example.use(helmet.hidePoweredBy()); See
|
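What `app.disable("x-powered-by")` and helmet's hidePoweredBy achieve is simply that the framework stops emitting the identifying header. The same idea expressed as a generic response filter (a hypothetical helper, not Express internals):

```javascript
// Removes headers that leak framework or server version information
// before a response is sent.
function stripVersionHeaders(headers) {
  const cleaned = { ...headers };
  delete cleaned['x-powered-by'];
  delete cleaned['server'];
  return cleaned;
}

const leaky = { 'content-type': 'text/html', 'x-powered-by': 'Express' };
console.log(stripVersionHeaders(leaky)); // only content-type remains
```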
||||||||||||
typescript:S5148 |
A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object of the opened page refers back to the original window, and its location can be changed by the opened page). For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesUse the noopener window feature so that the opened window does not retain a reference to its opener. Note: In Chrome 88+, Firefox 79+ or Safari 12.1+, links with target="_blank" imply rel="noopener", so the protection is enabled by default. Sensitive Code Examplewindow.open("https://example.com/dangerous"); // Sensitive Compliant Solutionwindow.open("https://example.com/dangerous", "WindowName", "noopener"); See |
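One way to apply the compliant pattern consistently is a small wrapper that always supplies the noopener feature. Since window.open only exists in a browser, the sketch below (a hypothetical helper) just builds the argument list; the real call is shown in the comment:

```javascript
// Builds the arguments for a safe window.open call: the "noopener"
// feature ensures window.opener is null in the opened window, and
// "noreferrer" additionally suppresses the Referer header.
function openExternal(url, name = '_blank') {
  return { url, name, features: 'noopener,noreferrer' };
}
// In a real page:
// const args = openExternal('https://example.com/dangerous');
// window.open(args.url, args.name, args.features);

console.log(openExternal('https://example.com/dangerous').features);
```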
||||||||||||
typescript:S6327 |
Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS allows messages to be encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIt’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary. Sensitive Code Exampleimport { Topic } from 'aws-cdk-lib/aws-sns'; new Topic(this, 'exampleTopic'); // Sensitive import { Topic, CfnTopic } from 'aws-cdk-lib/aws-sns'; new CfnTopic(this, 'exampleCfnTopic'); // Sensitive Compliant Solutionimport { Topic } from 'aws-cdk-lib/aws-sns'; import { Key } from 'aws-cdk-lib/aws-kms'; const encryptionKey = new Key(this, 'exampleKey', { enableKeyRotation: true, }); new Topic(this, 'exampleTopic', { masterKey: encryptionKey }); import { CfnTopic } from 'aws-cdk-lib/aws-sns'; import { Key } from 'aws-cdk-lib/aws-kms'; const encryptionKey = new Key(this, 'exampleKey', { enableKeyRotation: true, }); const cfnTopic = new CfnTopic(this, 'exampleCfnTopic', { kmsMasterKeyId: encryptionKey.keyId }); See |
||||||||||||
typescript:S6329 |
Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption. Depending on the component, inbound access from the Internet can be enabled via:
Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident. This decision increases the likelihood of attacks on the organization, such as:
Ask Yourself WhetherThis cloud resource:
There is a risk if you answered no to any of those questions. Recommended Secure Coding PracticesAvoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites. Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components. The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address. Sensitive Code ExampleFor aws-cdk-lib.aws_ec2.Instance and similar constructs: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.Instance(this, "example", { instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC} // Sensitive }) For aws-cdk-lib.aws_ec2.CfnInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnInstance(this, "example", { instanceType: "t2.micro", imageId: "ami-0ea0f26a6d50850c5", networkInterfaces: [ { deviceIndex: "0", associatePublicIpAddress: true, // Sensitive deleteOnTermination: true, subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PUBLIC}).subnetIds[0] } ] }) For aws-cdk-lib.aws_dms.CfnReplicationInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new dms.CfnReplicationInstance( this, "example", { replicationInstanceClass: "dms.t2.micro", allocatedStorage: 5, publiclyAccessible: true, // Sensitive replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier, vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup] }) For aws-cdk-lib.aws_rds.CfnDBInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' const rdsSubnetGroupPublic = new rds.CfnDBSubnetGroup(this, "publicSubnet", { dbSubnetGroupDescription: "Subnets", dbSubnetGroupName: "publicSn", subnetIds: vpc.selectSubnets({ subnetType: 
ec2.SubnetType.PUBLIC }).subnetIds }) new rds.CfnDBInstance(this, "example", { engine: "postgres", masterUsername: "foobar", masterUserPassword: "12345678", dbInstanceClass: "db.r5.large", allocatedStorage: "200", iops: 1000, dbSubnetGroupName: rdsSubnetGroupPublic.ref, publiclyAccessible: true, // Sensitive vpcSecurityGroups: [sg.securityGroupId] }) Compliant SolutionFor aws-cdk-lib.aws_ec2.Instance and similar constructs: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.Instance( this, "example", { instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS} }) For aws-cdk-lib.aws_ec2.CfnInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnInstance(this, "example", { instanceType: "t2.micro", imageId: "ami-0ea0f26a6d50850c5", networkInterfaces: [ { deviceIndex: "0", associatePublicIpAddress: false, deleteOnTermination: true, subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}).subnetIds[0] } ] }) For aws-cdk-lib.aws_dms.CfnReplicationInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' new dms.CfnReplicationInstance( this, "example", { replicationInstanceClass: "dms.t2.micro", allocatedStorage: 5, publiclyAccessible: false, replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier, vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup] }) For aws-cdk-lib.aws_rds.CfnDBInstance: import {aws_ec2 as ec2} from 'aws-cdk-lib' const rdsSubnetGroupPrivate = new rds.CfnDBSubnetGroup(this, "example",{ dbSubnetGroupDescription: "Subnets", dbSubnetGroupName: "privateSn", subnetIds: vpc.selectSubnets({ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }).subnetIds }) new rds.CfnDBInstance(this, "example", { engine: "postgres", masterUsername: "foobar", masterUserPassword: "12345678", dbInstanceClass: "db.r5.large", allocatedStorage: "200", iops: 1000, dbSubnetGroupName: rdsSubnetGroupPrivate.ref, publiclyAccessible: false, vpcSecurityGroups: 
[sg.securityGroupId] }) See
|
||||||||||||
typescript:S4036 |
When executing an OS command, unless you specify the full path to the executable, the locations listed in your application’s PATH environment variable are searched for the executable. This search could leave an opening for an attacker if one of those locations is under their control. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesFully qualified/absolute path should be used to specify the OS command to execute. Sensitive Code Exampleconst cp = require('child_process'); cp.exec('file.exe'); // Sensitive Compliant Solutionconst cp = require('child_process'); cp.exec('/usr/bin/file.exe'); // Compliant See |
||||||||||||
typescript:S5247 |
To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty or Groovy's template engine allow variables to be automatically escaped before rendering. Auto-escaping is not a magic feature to annihilate all cross-site scripting attacks; it depends on the strategy applied and the context. For example, a "html auto-escaping" strategy (which only transforms html characters into html entities) will not be relevant when variables are used in a html attribute, because the ':' character is not escaped and a 'javascript:' URL can be injected: <a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie) <a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack) Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one. Sensitive Code Examplemustache.js template engine: let Mustache = require("mustache"); Mustache.escape = function(text) {return text;}; // Sensitive let rendered = Mustache.render(template, { name: inputName }); handlebars.js template engine: const Handlebars = require('handlebars'); let source = "<p>attack {{name}}</p>"; let template = Handlebars.compile(source, { noEscape: true }); // Sensitive markdown-it markup language parser: const markdownIt = require('markdown-it'); let md = markdownIt({ html: true // Sensitive }); let result = md.render('# <b>attack</b>'); marked markup language parser: const marked = require('marked'); marked.setOptions({ renderer: new marked.Renderer(), sanitize: false // Sensitive }); console.log(marked("# test <b>attack</b>")); kramed markup language parser: let kramed = require('kramed'); var options = { renderer: new kramed.Renderer({ sanitize: false // Sensitive }) }; Compliant Solutionmustache.js template engine: let Mustache = require("mustache"); let rendered = Mustache.render(template, { name: inputName }); // Compliant autoescaping is on by default handlebars.js template engine: const Handlebars = require('handlebars'); let source = "<p>attack {{name}}</p>"; let data = { "name": "<b>Alan</b>" }; let template = Handlebars.compile(source); // Compliant by default noEscape is set to false markdown-it markup language parser: let md = require('markdown-it')(); // Compliant by default html is set to false let result = md.render('# <b>attack</b>'); marked markup language parser: const marked = require('marked'); marked.setOptions({ renderer: new marked.Renderer() }); // Compliant by default sanitize is set to true console.log(marked("# test <b>attack</b>")); kramed markup language parser: let
kramed = require('kramed'); let options = { renderer: new kramed.Renderer({ sanitize: true // Compliant }) }; console.log(kramed('Attack [xss?](javascript:alert("xss")).', options)); See
|
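At its core, the "html auto-escaping" strategy these engines apply replaces the five HTML-significant characters with entities before a value is interpolated into the template. A minimal sketch of that transformation (a hypothetical helper; engines such as Mustache do this by default):

```javascript
// Replaces the HTML-significant characters with entities so that an
// interpolated value is rendered as text rather than parsed as markup.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<b>attack</b>')); // &lt;b&gt;attack&lt;/b&gt;
```

Note the '&' replacement must run first, otherwise the '&' introduced by the other entities would be escaped again. As discussed above, this strategy alone does not protect values placed in URL attributes.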
||||||||||||
typescript:S6321 |
Why is this an issue?Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and
outbound traffic. What is the potential impact?Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system. Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system. How to fix itIt is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers. Code examplesNoncompliant code exampleFor aws-cdk-lib.aws_ec2.Instance and other constructs
that support a connections attribute: import {aws_ec2 as ec2} from 'aws-cdk-lib' const instance = new ec2.Instance(this, "default-own-security-group",{ instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, instanceName: "test-instance" }) instance.connections.allowFrom( ec2.Peer.anyIpv4(), // Noncompliant ec2.Port.tcp(22), /*description*/ "Allows SSH from all IPv4" ) For aws-cdk-lib.aws_ec2.SecurityGroup: import {aws_ec2 as ec2} from 'aws-cdk-lib' const securityGroup = new ec2.SecurityGroup(this, "custom-security-group", { vpc: vpc }) securityGroup.addIngressRule( ec2.Peer.anyIpv4(), // Noncompliant ec2.Port.tcpRange(1, 1024) ) For aws-cdk-lib.aws_ec2.CfnSecurityGroup: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnSecurityGroup( this, "cfn-based-security-group", { groupDescription: "cfn based security group", groupName: "cfn-based-security-group", vpcId: vpc.vpcId, securityGroupIngress: [ { ipProtocol: "6", cidrIp: "0.0.0.0/0", // Noncompliant fromPort: 22, toPort: 22 } ] } ) For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnSecurityGroupIngress( // Noncompliant this, "ingress-all-ip-tcp-ssh", { ipProtocol: "tcp", cidrIp: "0.0.0.0/0", fromPort: 22, toPort: 22, groupId: securityGroup.attrGroupId }) Compliant solutionFor aws-cdk-lib.aws_ec2.Instance and other constructs
that support a connections attribute: import {aws_ec2 as ec2} from 'aws-cdk-lib' const instance = new ec2.Instance(this, "default-own-security-group",{ instanceType: nanoT2, machineImage: ec2.MachineImage.latestAmazonLinux(), vpc: vpc, instanceName: "test-instance" }) instance.connections.allowFrom( ec2.Peer.ipv4("192.0.2.0/24"), ec2.Port.tcp(22), /*description*/ "Allows SSH from a trusted range" ) For aws-cdk-lib.aws_ec2.SecurityGroup: import {aws_ec2 as ec2} from 'aws-cdk-lib' const securityGroup3 = new ec2.SecurityGroup(this, "custom-security-group", { vpc: vpc }) securityGroup3.addIngressRule( ec2.Peer.anyIpv4(), ec2.Port.tcpRange(1024, 1048) ) For aws-cdk-lib.aws_ec2.CfnSecurityGroup: import {aws_ec2 as ec2} from 'aws-cdk-lib' new ec2.CfnSecurityGroup( this, "cfn-based-security-group", { groupDescription: "cfn based security group", groupName: "cfn-based-security-group", vpcId: vpc.vpcId, securityGroupIngress: [ { ipProtocol: "6", cidrIp: "192.0.2.0/24", fromPort: 22, toPort: 22 } ] } ) For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress: new ec2.CfnSecurityGroupIngress( this, "ingress-all-ipv4-tcp-http", { ipProtocol: "6", cidrIp: "0.0.0.0/0", fromPort: 80, toPort: 80, groupId: securityGroup.attrGroupId } ) ResourcesDocumentation
Standards |
||||||||||||
typescript:S6330 |
Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For `aws-cdk-lib.aws_sqs.Queue`:

```typescript
import { Queue } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example'); // Sensitive
```

For `aws-cdk-lib.aws_sqs.CfnQueue`:

```typescript
import { CfnQueue } from 'aws-cdk-lib/aws-sqs';

new CfnQueue(this, 'example'); // Sensitive
```

Compliant Solution

For `aws-cdk-lib.aws_sqs.Queue`:

```typescript
import { Queue, QueueEncryption } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example', {
  encryption: QueueEncryption.KMS_MANAGED
});
```

For `aws-cdk-lib.aws_sqs.CfnQueue`:

```typescript
import { CfnQueue } from 'aws-cdk-lib/aws-sqs';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'example-key', {
  enableKeyRotation: true,
});

new CfnQueue(this, 'example', {
  kmsMasterKeyId: encryptionKey.keyId
});
```

See
|
||||||||||||
typescript:S6333 |
Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure. Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API. Ask Yourself Whether
There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities. AWS provides multiple methods to do so:
Sensitive Code Example

For `aws-cdk-lib.aws_apigateway.Resource`:

```typescript
import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example")
resource.addMethod(
  "GET",
  new apigateway.HttpIntegration("https://example.org"),
  {
    authorizationType: apigateway.AuthorizationType.NONE // Sensitive
  }
)
```

For `aws-cdk-lib.aws_apigatewayv2.CfnRoute`:

```typescript
import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "no-auth", {
  apiId: api.ref,
  routeKey: "GET /no-auth",
  authorizationType: "NONE", // Sensitive
  target: exampleIntegration
})
```

Compliant Solution

For `aws-cdk-lib.aws_apigateway.Resource`:

```typescript
import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example", {
  defaultMethodOptions: {
    authorizationType: apigateway.AuthorizationType.IAM
  }
})
resource.addMethod(
  "POST",
  new apigateway.HttpIntegration("https://example.org"),
  {
    authorizationType: apigateway.AuthorizationType.IAM
  }
)
resource.addMethod( // authorizationType is inherited from the Resource's configured defaultMethodOptions
  "GET"
)
```

For `aws-cdk-lib.aws_apigatewayv2.CfnRoute`:

```typescript
import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "auth", {
  apiId: api.ref,
  routeKey: "POST /auth",
  authorizationType: "AWS_IAM",
  target: exampleIntegration
})
```

See
|
||||||||||||
typescript:S2092 |
When a cookie is protected with the `secure` attribute set to *true*, the browser will not send it over an unencrypted HTTP request, so it cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
Sensitive Code Example

`cookie-session` module:

```javascript
let session = cookieSession({
  secure: false, // Sensitive
}); // Sensitive
```

`express-session` module:

```javascript
const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie: {
    secure: false // Sensitive
  }
}));
```

`cookies` module:

```javascript
let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: false // Sensitive
}); // Sensitive
```

`csurf` module:

```javascript
const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: false } }); // Sensitive
```

Compliant Solution

`cookie-session` module:

```javascript
let session = cookieSession({
  secure: true, // Compliant
}); // Compliant
```

`express-session` module:

```javascript
const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie: {
    secure: true // Compliant
  }
}));
```

`cookies` module:

```javascript
let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: true // Compliant
}); // Compliant
```

`csurf` module:

```javascript
const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: true } }); // Compliant
```

See
|
||||||||||||
typescript:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities:

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act as directives for the browser and change the access control policy, i.e. relax the same-origin policy.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
Sensitive Code Example

Node.js `http` built-in module:

```javascript
const http = require('http');

const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' }); // Sensitive
  res.end('ok');
});
srv.listen(3000);
```

Express.js framework with `cors` middleware:

```javascript
const cors = require('cors');

let app1 = express();
app1.use(cors()); // Sensitive: by default origin is set to *

let corsOptions = {
  origin: '*' // Sensitive
};

let app2 = express();
app2.use(cors(corsOptions));
```

User-controlled origin:

```javascript
function (req, res) {
  const origin = req.header('Origin');
  res.setHeader('Access-Control-Allow-Origin', origin); // Sensitive
};
```

Compliant Solution

Node.js `http` built-in module:

```javascript
const http = require('http');

const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': 'trustedwebsite.com' }); // Compliant
  res.end('ok');
});
srv.listen(3000);
```

Express.js framework with `cors` middleware:

```javascript
const cors = require('cors');

let corsOptions = {
  origin: 'trustedwebsite.com' // Compliant
};

let app = express();
app.use(cors(corsOptions));
```

User-controlled origin validated with an allow-list:

```javascript
function (req, res) {
  const origin = req.header('Origin');

  if (trustedOrigins.indexOf(origin) >= 0) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
};
```

See
|
||||||||||||
typescript:S6332 |
Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium, or otherwise leak stored files (for example through a vulnerability in the service), they are not able to access the data.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For `aws-cdk-lib.aws_efs.FileSystem`:

```typescript
import { FileSystem } from 'aws-cdk-lib/aws-efs';

new FileSystem(this, 'unencrypted-explicit', {
  vpc: new Vpc(this, 'VPC'),
  encrypted: false // Sensitive
});
```

For `aws-cdk-lib.aws_efs.CfnFileSystem`:

```typescript
import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'unencrypted-implicit-cfn', {
}); // Sensitive as encryption is disabled by default
```

Compliant Solution

For `aws-cdk-lib.aws_efs.FileSystem`:

```typescript
import { FileSystem } from 'aws-cdk-lib/aws-efs';

new FileSystem(this, 'encrypted-explicit', {
  vpc: new Vpc(this, 'VPC'),
  encrypted: true
});
```

For `aws-cdk-lib.aws_efs.CfnFileSystem`:

```typescript
import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'encrypted-explicit-cfn', {
  encrypted: true
});
```

See
|
||||||||||||
csharpsquid:S2115 |
When accessing a database, an empty password should be avoided as it introduces a weakness. Why is this an issue?When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials. What is the potential impact?Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains. Unauthorized Access to Sensitive DataWhen a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage. Compromise of System IntegrityWithout a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks. Unwanted Modifications or DeletionsThe absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. 
This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in Entity Framework Core

Code examples

The following code uses an empty password to connect to a SQL Server database. The vulnerability can be fixed by using Windows authentication (sometimes referred to as integrated security).

Noncompliant code example

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer("Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password="); // Noncompliant
}
```

Compliant solution

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer("Server=myServerAddress;Database=myDataBase;Integrated Security=True");
}
```

How does this work?

Windows authentication (integrated security)

When the connection string includes the `Integrated Security=True` setting, the application authenticates with the Windows credentials of the account running the process, so no password has to be stored in the connection string.

It’s important to note that when using integrated security, the user running the application must have the necessary permissions to access the database. Ensure that the user account running the application has the appropriate privileges and is granted access to the database.

The syntax employed in connection strings varies by provider:
Note: Some providers such as MySQL do not support Windows authentication with .NET Core. PitfallsHard-coded passwordsIt could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:
To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase. ResourcesStandards |
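The environment-variable approach recommended above can be sketched in application code. The following TypeScript example is illustrative only (the variable names `DB_HOST`, `DB_NAME`, `DB_USER`, and `DB_PASSWORD` are hypothetical deployment settings, not part of the rule): it builds a connection string from the environment instead of hard-coding a password, and fails fast when credentials are missing rather than silently connecting without them.

```typescript
// Sketch: assemble a connection string from deployment-provided environment
// variables so no password is ever committed to the codebase.
function buildConnectionString(env: Record<string, string | undefined>): string {
  const { DB_HOST, DB_NAME, DB_USER, DB_PASSWORD } = env;
  if (!DB_HOST || !DB_NAME || !DB_USER || !DB_PASSWORD) {
    // Refuse to start with incomplete credentials instead of using an empty password.
    throw new Error("Database credentials are not fully configured");
  }
  return `Server=${DB_HOST};Database=${DB_NAME};User Id=${DB_USER};Password=${DB_PASSWORD}`;
}
```

In a real service this would typically be called once at startup with `process.env`, so a misconfigured deployment fails immediately and visibly.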
||||||||||||
csharpsquid:S3329 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In Cipher Block Chaining (CBC) mode, each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV). If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

```csharp
using System.IO;
using System.Security.Cryptography;

public void Encrypt(byte[] key, byte[] dataToEncrypt, MemoryStream target)
{
    var aes = new AesCryptoServiceProvider();
    byte[] iv = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
    var encryptor = aes.CreateEncryptor(key, iv); // Noncompliant
    var cryptoStream = new CryptoStream(target, encryptor, CryptoStreamMode.Write);
    var swEncrypt = new StreamWriter(cryptoStream);
    swEncrypt.Write(dataToEncrypt);
}
```

Compliant solution

In this example, the code implicitly uses a number generator that is considered strong, by relying on the randomly generated `aes.IV` value:

```csharp
using System.IO;
using System.Security.Cryptography;

public void Encrypt(byte[] key, byte[] dataToEncrypt, MemoryStream target)
{
    var aes = new AesCryptoServiceProvider();
    var encryptor = aes.CreateEncryptor(key, aes.IV);
    var cryptoStream = new CryptoStream(target, encryptor, CryptoStreamMode.Write);
    var swEncrypt = new StreamWriter(cryptoStream);
    swEncrypt.Write(dataToEncrypt);
}
```

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext. In the previous noncompliant example, the problem is not that the IV is known, but that it is hard-coded and therefore reused for every encryption operation.

Resources

Standards
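To illustrate the unique-IV requirement above in a cross-language sketch (Node.js `crypto` rather than the .NET API of this rule; the helper names `cbcEncrypt`/`cbcDecrypt` are ours), the idea is to generate a fresh random IV for every message and transmit it alongside the ciphertext, since the IV may be public:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

// Generate a fresh, unpredictable IV per message and prepend it to the ciphertext.
function cbcEncrypt(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(16); // unique IV for this message only
  const cipher = createCipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([iv, cipher.update(plaintext), cipher.final()]);
}

// Recover the transmitted IV from the first 16 bytes, then decrypt the rest.
function cbcDecrypt(key: Buffer, data: Buffer): Buffer {
  const iv = data.subarray(0, 16);
  const decipher = createDecipheriv("aes-256-cbc", key, iv);
  return Buffer.concat([decipher.update(data.subarray(16)), decipher.final()]);
}
```

With this scheme, encrypting the same plaintext twice under the same key produces different ciphertexts, so an observer cannot detect repeated messages.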
|
||||||||||||
csharpsquid:S4502 |
A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application. The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request; and because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
Sensitive Code Example

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddControllersWithViews(options =>
        options.Filters.Add(new IgnoreAntiforgeryTokenAttribute())); // Sensitive
    // ...
}

[HttpPost, IgnoreAntiforgeryToken] // Sensitive
public IActionResult ChangeEmail(ChangeEmailModel model) => View("~/Views/...");
```

Compliant Solution

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddControllersWithViews(options =>
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));
    // or
    services.AddControllersWithViews(options =>
        options.Filters.Add(new ValidateAntiForgeryTokenAttribute()));
    // ...
}

[HttpPost]
[AutoValidateAntiforgeryToken]
public IActionResult ChangeEmail(ChangeEmailModel model) => View("~/Views/...");
```

See |
||||||||||||
csharpsquid:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers. The .NET Core framework offers multiple features which help during debugging.
Guard any debugging feature behind an environment check (such as `env.IsDevelopment()`), so that it is only enabled during development.

Sensitive Code Example

This rule raises issues when the following .NET Core methods are called:
```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace mvcApp
{
    public class Startup2
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            // These calls are Sensitive because they will also run in production
            app.UseDeveloperExceptionPage(); // Sensitive
            app.UseDatabaseErrorPage(); // Sensitive
        }
    }
}
```

Compliant Solution

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace mvcApp
{
    public class Startup2
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                // The following calls are ok because they are disabled in production
                app.UseDeveloperExceptionPage(); // Compliant
                app.UseDatabaseErrorPage(); // Compliant
            }
        }
    }
}
```

Exceptions

This rule does not analyze configuration files. Make sure that debug mode is not enabled by default in those files.

See |
||||||||||||
csharpsquid:S5773 |
Deserialization is the process of converting serialized data (such as objects or data structures) back into their original form. Types allowed to be unserialized should be strictly controlled. Why is this an issue?During the deserialization process, the state of an object will be reconstructed from the serialized data stream. By allowing unrestricted deserialization of types, the application makes it possible for attackers to use types with dangerous or otherwise sensitive behavior during the deserialization process. What is the potential impact?When an application deserializes untrusted data without proper restrictions, an attacker can craft malicious serialized objects. Depending on the affected objects and properties, the consequences can vary. Remote Code ExecutionIf attackers can craft malicious serialized objects that contain executable code, this code will run within the application’s context, potentially gaining full control over the system. This can lead to unauthorized access, data breaches, or even complete system compromise. For example, a well-known attack vector consists in serializing an object of type Privilege escalationUnrestricted deserialization can also enable attackers to escalate their privileges within the application. By manipulating the serialized data, an attacker can modify object properties or bypass security checks, granting them elevated privileges that they should not have. This can result in unauthorized access to sensitive data, unauthorized actions, or even administrative control over the application. Denial of ServiceIn some cases, an attacker can abuse the deserialization process to cause a denial of service (DoS) condition. By providing specially crafted serialized data, the attacker can trigger excessive resource consumption, leading to system instability or unresponsiveness. This can disrupt the availability of the application, impacting its functionality and causing inconvenience to users. 
How to fix it

Code examples

Noncompliant code example

With `BinaryFormatter`:

```csharp
var myBinaryFormatter = new BinaryFormatter();
myBinaryFormatter.Deserialize(stream); // Noncompliant
```

With `JavaScriptSerializer`:

```csharp
JavaScriptSerializer serializer1 = new JavaScriptSerializer(new SimpleTypeResolver()); // Noncompliant
serializer1.Deserialize<ExpectedType>(json);
```

Compliant solution

With `BinaryFormatter`, restrict the types allowed to be deserialized with a custom `SerializationBinder`:

```csharp
sealed class CustomBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        if (!(typeName == "type1" || typeName == "type2" || typeName == "type3"))
        {
            throw new SerializationException("Only type1, type2 and type3 are allowed");
        }
        return Assembly.Load(assemblyName).GetType(typeName);
    }
}

var myBinaryFormatter = new BinaryFormatter();
myBinaryFormatter.Binder = new CustomBinder();
myBinaryFormatter.Deserialize(stream);
```

With `JavaScriptSerializer`, use a custom type resolver:

```csharp
public class CustomSafeTypeResolver : JavaScriptTypeResolver
{
    public override Type ResolveType(string id)
    {
        if (id != "ExpectedType")
        {
            throw new SerializationException("Only ExpectedType is allowed during deserialization");
        }
        return Type.GetType(id);
    }
}

JavaScriptSerializer serializer = new JavaScriptSerializer(new CustomSafeTypeResolver());
serializer.Deserialize<ExpectedType>(json);
```

Going the extra mile

Instead of using `BinaryFormatter` and similarly dangerous serializers, prefer a safer serialization mechanism. If it’s not possible, then try to mitigate the risk by restricting the types allowed to be deserialized:
ResourcesDocumentation
Articles & blog posts
Standards |
||||||||||||
csharpsquid:S4211 |
Transparency attributes in the .NET Framework, designed to protect security-critical operations, can lead to ambiguities and vulnerabilities when declared at different levels, such as both for the class and for a method.

Why is this an issue?

Transparency attributes can be declared at several levels. If two different attributes are declared at two different levels, the attribute that prevails is the one at the highest level. For example, you can declare that a class is `SecuritySafeCritical` while one of its methods is marked `SecurityCritical`; the class-level attribute prevails, and the method-level attribute is effectively ignored.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Elevation of Privileges

An attacker could potentially exploit conflicting transparency attributes to perform actions with higher privileges than intended.

Data Exposure

If a member with conflicting attributes is involved in handling sensitive data, an attacker could exploit the vulnerability to gain unauthorized access to this data. This could lead to breaches of confidentiality and potential data loss.

How to fix it

Code examples

Noncompliant code example

```csharp
using System;
using System.Security;

namespace MyLibrary
{
    [SecuritySafeCritical]
    public class Foo
    {
        [SecurityCritical] // Noncompliant
        public void Bar()
        {
        }
    }
}
```

Compliant solution

```csharp
using System;
using System.Security;

namespace MyLibrary
{
    public class Foo
    {
        [SecurityCritical]
        public void Bar()
        {
        }
    }
}
```

How does this work?

Never set class-level annotations

A class should never have class-level annotations if some functions have different permission levels. Instead, make sure every function has its own correct annotation. If no function needs a particularly distinct security annotation in a class, just set a class-level annotation.

Resources

Articles & blog posts

Standards |
||||||||||||
csharpsquid:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm, the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message, it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

```csharp
using System.Security.Cryptography;

public void encrypt()
{
    var simpleDES = new DESCryptoServiceProvider(); // Noncompliant
}
```

Compliant solution

```csharp
using System.Security.Cryptography;

public void encrypt()
{
    using (Aes aes = Aes.Create())
    {
        // ...
    }
}
```

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards |
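For JavaScript/TypeScript codebases, the same fix (AES instead of DES) might look like the following Node.js sketch using the built-in `crypto` module; the helper names `aesGcmEncrypt`/`aesGcmDecrypt` are illustrative, and GCM is chosen here because it also authenticates the ciphertext:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

// AES-256 in GCM mode: strong cipher, 128-bit blocks, plus an authentication tag
// that makes tampering with the ciphertext detectable.
function aesGcmEncrypt(key: Buffer, plaintext: Buffer) {
  const iv = randomBytes(12); // recommended nonce size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function aesGcmDecrypt(key: Buffer, box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }) {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  // final() throws if the tag does not match, i.e. if the data was modified.
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]);
}
```

Decryption fails loudly when the authentication tag does not match, which addresses the "attacker modifies the ciphertext" scenario described above.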
||||||||||||
csharpsquid:S5659 |
This vulnerability allows forging of JSON Web Tokens to impersonate other users. Why is this an issue?JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature. What is the potential impact?When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities. Impersonation of usersJWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data. Unauthorized data accessWhen a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access. How to fix it in Jwt.NetCode examplesThe following code contains an example of JWT decoding without verification of the signature. 
Noncompliant code example

```csharp
using JWT;

public static void decode(IJwtDecoder decoder)
{
    decoder.Decode(token, secret, verify: false); // Noncompliant
}
```

```csharp
using JWT;

public static void decode()
{
    var jwt = new JwtBuilder()
        .WithSecret(secret)
        .Decode(token); // Noncompliant
}
```

Compliant solution

```csharp
using JWT;

public static void decode(IJwtDecoder decoder)
{
    decoder.Decode(token, secret, verify: true);
}
```

When using `JwtBuilder`, require signature verification explicitly:

```csharp
using JWT;

public static void decode()
{
    var jwt = new JwtBuilder()
        .WithSecret(secret)
        .MustVerifySignature()
        .Decode(token);
}
```

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:
By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process. Going the extra mileSecurely store your secret keysEnsure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services. Rotate your secret keysEven with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions. ResourcesStandards |
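To make the verification step concrete outside the Jwt.NET API, here is a simplified TypeScript sketch of what HS256 signature verification does under the hood (a real application should rely on a maintained JWT library; `verifyHs256` is a hypothetical helper, not a library function):

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Recompute the HMAC-SHA256 signature over "header.payload" and compare it,
// in constant time, against the signature carried by the token.
function verifyHs256(token: string, secret: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const expected = createHmac("sha256", secret)
    .update(`${parts[0]}.${parts[1]}`)
    .digest("base64url");
  const a = Buffer.from(expected);
  const b = Buffer.from(parts[2]);
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}
```

A token whose payload has been tampered with no longer matches its signature, so the check fails, which is exactly the forgery scenario the rule warns about.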
||||||||||||
csharpsquid:S4212 |
This rule is deprecated, and will eventually be removed.

Why is this an issue?

Because serialization constructors allocate and initialize objects, security checks that are present on regular constructors must also be present on a serialization constructor. Failure to do so would allow callers that could not otherwise create an instance to use the serialization constructor to do so.

This rule raises an issue when a type implements the `ISerializable` interface and its serialization constructor is not guarded by the security checks that protect the regular constructors.

Noncompliant code example

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallersAttribute()]
namespace MyLibrary
{
    [Serializable]
    public class Foo : ISerializable
    {
        private int n;

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public Foo()
        {
            n = -1;
        }

        protected Foo(SerializationInfo info, StreamingContext context) // Noncompliant
        {
            n = (int)info.GetValue("n", typeof(int));
        }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
            info.AddValue("n", n);
        }
    }
}
```

Compliant solution

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallersAttribute()]
namespace MyLibrary
{
    [Serializable]
    public class Foo : ISerializable
    {
        private int n;

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public Foo()
        {
            n = -1;
        }

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        protected Foo(SerializationInfo info, StreamingContext context)
        {
            n = (int)info.GetValue("n", typeof(int));
        }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
            info.AddValue("n", n);
        }
    }
}
```

Resources |
||||||||||||
csharpsquid:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacy. When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issues. In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in .NET. Code examples. Noncompliant code example: These samples use TLSv1.0 as the default TLS algorithm, which is cryptographically weak. using System.Net; public void encrypt() { ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls; // Noncompliant } using System.Net.Http; using System.Security.Authentication; public void encrypt() { new HttpClientHandler { SslProtocols = SslProtocols.Tls // Noncompliant }; } Compliant solution: using System.Net; public void encrypt() { ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls13; } using System.Net.Http; using System.Security.Authentication; public void encrypt() { new HttpClientHandler { SslProtocols = SslProtocols.Tls12 }; } How does this work? As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following.
Use TLS v1.2 or TLS v1.3Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older and insecure cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. ResourcesArticles & blog posts
Standards
|
||||||||||||
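The "TLS 1.2 or 1.3 only" recommendation above is not specific to .NET. As a hedged cross-language sketch, Python's standard `ssl` module expresses the same policy by raising the protocol floor on a client context:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Client context that refuses SSLv3 / TLS 1.0 / TLS 1.1 handshakes."""
    # create_default_context() already enables certificate and
    # hostname verification with sane cipher defaults.
    context = ssl.create_default_context()
    # Reject every protocol version older than TLS 1.2.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context
```

A context built this way can be passed to `http.client.HTTPSConnection` or `urllib.request.urlopen`; servers offering only deprecated protocol versions then fail the handshake instead of silently downgrading.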
csharpsquid:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution. For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. 
How to fix it in .NET. Code examples. Noncompliant code example: Example with a symmetric cipher, AES: using System.Security.Cryptography; public void encrypt() { AesManaged aes = new AesManaged { KeySize = 128, BlockSize = 128, Mode = CipherMode.ECB, // Noncompliant Padding = PaddingMode.PKCS7 }; } Note that Microsoft has marked derived cryptographic types like AesManaged as obsolete; the Aes.Create() factory method should be preferred. Example with an asymmetric cipher, RSA: using System.Security.Cryptography; public void encrypt() { RSACryptoServiceProvider RsaCsp = new RSACryptoServiceProvider(); byte[] encryptedData = RsaCsp.Encrypt(dataToEncrypt, false); // Noncompliant } Compliant solution: For the AES symmetric cipher, use the GCM mode: using System.Security.Cryptography; public void encrypt() { AesGcm aes = new AesGcm(key); } For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP): using System.Security.Cryptography; public void encrypt() { RSACryptoServiceProvider RsaCsp = new RSACryptoServiceProvider(); byte[] encryptedData = RsaCsp.Encrypt(dataToEncrypt, true); } How does this work? As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. Appropriate choices are currently the following. For AES: use authenticated encryption modes. The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead. For RSA: use the OAEP schemeThe Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA. ResourcesArticles & blog posts
Standards |
||||||||||||
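The ECB weakness described above, identical plaintext blocks encrypting to identical ciphertext blocks, can be demonstrated without a real cipher. The toy "block cipher" below is purely an illustrative assumption standing in for AES; it exists only to show the mode-level leak, never use anything homemade for real encryption.

```python
import hashlib

BLOCK = 16

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher: deterministic per (key, block).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # ECB: each block is encrypted independently, so repeats survive.
    return b"".join(
        toy_block_encrypt(plaintext[i:i + BLOCK], key)
        for i in range(0, len(plaintext), BLOCK)
    )

def ctr_encrypt(plaintext: bytes, key: bytes, nonce: bytes) -> bytes:
    # Counter mode: the keystream depends on the block index, so
    # identical plaintext blocks yield different ciphertext blocks.
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK):
        keystream = toy_block_encrypt(nonce + i.to_bytes(8, "big"), key)
        chunk = plaintext[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(chunk, keystream))
    return bytes(out)
```

Encrypting two identical 16-byte blocks shows the difference: under ECB the two ciphertext halves are equal (an attacker sees the repetition), while the counter mode hides it. Real code should use an authenticated mode such as AES-GCM from a vetted library.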
csharpsquid:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information. As the System.Random class relies on a pseudorandom number generator, it should not be used for security-critical applications or for protecting sensitive data. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplevar random = new Random(); // Sensitive use of Random byte[] data = new byte[16]; random.NextBytes(data); return BitConverter.ToString(data); // Check if this value is used for hashing or encryption Compliant Solutionusing System.Security.Cryptography; ... var randomGenerator = RandomNumberGenerator.Create(); // Compliant for security-sensitive use cases byte[] data = new byte[16]; randomGenerator.GetBytes(data); return BitConverter.ToString(data); See
|
||||||||||||
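The split between statistical PRNGs and cryptographic generators exists on most platforms, not just .NET. As a sketch, Python separates them into `random` (Mersenne Twister, predictable, for simulations) and `secrets` (backed by the OS cryptographic RNG); the function names below are assumptions for illustration:

```python
import random
import secrets

def session_token_insecure() -> str:
    # Sensitive: Mersenne Twister output can be reconstructed from
    # a few hundred observed samples, making tokens guessable.
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

def session_token_secure() -> str:
    # Compliant: secrets draws from the OS CSPRNG.
    return secrets.token_hex(16)  # 16 random bytes -> 32 hex characters
```

The secure variant costs one line and is the right default whenever the value guards authentication, session state, or secrets.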
csharpsquid:S3330 |
When a cookie is configured with the HttpOnly attribute set to false, it can be accessed by client-side scripts, making it easier to steal in case of a cross-site scripting (XSS) vulnerability. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example. When the HttpOnly property is explicitly set to false: HttpCookie myCookie = new HttpCookie("Sensitive cookie"); myCookie.HttpOnly = false; // Sensitive: this cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability The default value of HttpOnly is false:
HttpCookie myCookie = new HttpCookie("Sensitive cookie"); // Sensitive: this cookie is created without the httponly flag (by default set to false) and so it can be stolen easily in case of XSS vulnerability Compliant Solution. Set the HttpOnly property to true: HttpCookie myCookie = new HttpCookie("Sensitive cookie"); myCookie.HttpOnly = true; // Compliant: the sensitive cookie is protected against theft thanks to the HttpOnly property set to true (HttpOnly = true) Or change the default flag values for the whole application by editing the Web.config configuration file: <httpCookies httpOnlyCookies="true" requireSSL="true" />
See
|
||||||||||||
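The HttpOnly flag is not specific to ASP.NET; any server-side stack can emit it on the Set-Cookie header. A minimal sketch with Python's standard `http.cookies` (the cookie name and helper are illustrative assumptions):

```python
from http.cookies import SimpleCookie

def build_session_cookie(session_id: str) -> str:
    cookie = SimpleCookie()
    cookie["SESSIONID"] = session_id
    cookie["SESSIONID"]["httponly"] = True  # hidden from document.cookie
    cookie["SESSIONID"]["secure"] = True    # only sent over HTTPS
    cookie["SESSIONID"]["samesite"] = "Strict"
    # OutputString() renders the value for a Set-Cookie header.
    return cookie["SESSIONID"].OutputString()
```

With HttpOnly set, an XSS payload running in the page can no longer read the session identifier, which is exactly the mitigation the rule describes.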
csharpsquid:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in .NETCode examplesThe following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm. Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm. Noncompliant code exampleHere is an example of a private key generation with RSA: using System; using System.Security.Cryptography; public void encrypt() { var RsaCsp = new RSACryptoServiceProvider(); // Noncompliant } Here is an example of a key generation with the Digital Signature Algorithm (DSA): using System; using System.Security.Cryptography; public void encrypt() { var DsaCsp = new DSACryptoServiceProvider(); // Noncompliant } Here is an example of an Elliptic Curve (EC) initialization. 
It implicitly generates a private key whose size is indicated in the elliptic curve name: using System; using System.Security.Cryptography; public void encrypt() { ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.brainpoolP160t1); // Noncompliant } Compliant solutionusing System; using System.Security.Cryptography; public void encrypt() { var RsaCsp = new RSACryptoServiceProvider(2048); } using System; using System.Security.Cryptography; public void encrypt() { var Dsa = new DSACng(2048); } using System; using System.Security.Cryptography; public void encrypt() { ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256); } How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community. The appropriate choices are the following. RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4092 should be preferred when possible. AES (Advanced Encryption Standard)AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying
all possible keys. Currently, a minimum key size of 128 bits is recommended for AES. Elliptic Curve Cryptography (ECC)Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve
algorithms is mentioned directly in their names. For example, Currently, a minimum key size of 224 bits is recommended for EC-based algorithms. Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:
Pitfalls. The KeySize property is not a setter. The following code is invalid: var RsaCsp = new RSACryptoServiceProvider(); RsaCsp.KeySize = 2048; The KeySize property of CryptoServiceProviders cannot be updated because the setter simply does not exist. This means that this line will not
perform any update on the RsaCsp object. Going the extra mile: Pre-Quantum Cryptography. Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety. Resources
Articles & blog posts
Standards
|
||||||||||||
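The key-size recommendations above correspond to symmetric-equivalent strength levels. A small self-contained lookup makes the policy checkable in code; the strength figures follow the widely published NIST SP 800-57 equivalences, and the helper name is an assumption for the sketch:

```python
# Approximate symmetric-equivalent security levels (NIST SP 800-57).
SECURITY_BITS = {
    ("RSA", 1024): 80,   # deprecated
    ("RSA", 2048): 112,  # current recommended minimum
    ("RSA", 3072): 128,
    ("DSA", 2048): 112,
    ("EC", 224): 112,    # e.g. nistP224
    ("EC", 256): 128,    # e.g. nistP256
    ("AES", 128): 128,
}

def meets_minimum(algorithm: str, key_size: int, minimum_bits: int = 112) -> bool:
    """True when the (algorithm, key size) pair reaches the required
    symmetric-equivalent strength (default: 112 bits)."""
    return SECURITY_BITS[(algorithm, key_size)] >= minimum_bits
```

This mirrors the rule's point: RSA-1024 falls below the 112-bit floor, while RSA-2048, EC-224 and larger sizes pass.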
csharpsquid:S5753 |
ASP.NET 1.1+ comes with a feature called Request Validation, which prevents the server from accepting content containing un-encoded HTML. This feature comes as a first protection layer against Cross-Site Scripting (XSS) attacks and acts as a simple Web Application Firewall (WAF), rejecting requests that potentially contain malicious content. While this feature is not a silver bullet to prevent all XSS attacks, it helps to catch basic ones. It will, for example, prevent a payload such as <script> from reaching your controllers. Note: as the Request Validation feature is only available for ASP.NET, no Security Hotspot is raised on ASP.NET Core applications. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example. At Controller level: [ValidateInput(false)] public ActionResult Welcome(string name) { ... } At application level, configured in the Web.config file: <configuration> <system.web> <pages validateRequest="false" /> ... <httpRuntime requestValidationMode="0.0" /> </system.web> </configuration> Compliant Solution. At Controller level: [ValidateInput(true)] public ActionResult Welcome(string name) { ... } or, since request validation is enabled by default, simply omit the attribute: public ActionResult Welcome(string name) { ... } At application level, configured in the Web.config file: <configuration> <system.web> <pages validateRequest="true" /> ... <httpRuntime requestValidationMode="4.5" /> </system.web> </configuration> See
|
||||||||||||
csharpsquid:S5766 |
The deserialization process extracts data from the serialized representation of an object and reconstructs it directly, without calling constructors. Thus, data validation implemented in constructors can be bypassed if serialized objects are controlled by an attacker. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example. When a serializable class doesn’t implement the ISerializable or IDeserializationCallback types and has a regular constructor using its parameters in conditions: [Serializable] public class InternalUrl { private string url; public InternalUrl(string tmpUrl) // Sensitive { if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation { url= "http://localhost/default"; } else { url= tmpUrl; } } } When a class implements the ISerializable type, has a regular constructor using its parameters in conditions, but doesn’t perform the same validation after deserialization: [Serializable] public class InternalUrl : ISerializable { private string url; public InternalUrl(string tmpUrl) // Sensitive { if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation { url= "http://localhost/default"; } else { url= tmpUrl; } } // special constructor used during deserialization protected InternalUrl(SerializationInfo info, StreamingContext context) // Sensitive { url= (string) info.GetValue("url", typeof(string)); // the same validation as seen in the regular constructor is not performed } void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) { info.AddValue("url", url); } } When a class implements the IDeserializationCallback
type, has a constructor using its parameters in conditions, but the OnDeserialization method doesn’t perform the same validation: [Serializable] public class InternalUrl : IDeserializationCallback { private string url; public InternalUrl(string tmpUrl) // Sensitive { if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation { url= "http://localhost/default"; } else { url= tmpUrl; } } void IDeserializationCallback.OnDeserialization(object sender) // Sensitive { // the same validation as seen in the constructor is not performed } } Compliant Solution. When using the ISerializable
type to control deserialization, perform the same checks inside regular constructors as in the special constructor: [Serializable] public class InternalUrl : ISerializable { private string url; public InternalUrl(string tmpUrl) { if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation { url= "http://localhost/default"; } else { url= tmpUrl; } } // special constructor used during deserialization protected InternalUrl(SerializationInfo info, StreamingContext context) { string tmpUrl= (string) info.GetValue("url", typeof(string)); if(!tmpUrl.StartsWith("http://localhost/")) { // Compliant url= "http://localhost/default"; } else { url= tmpUrl; } } void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) { info.AddValue("url", url); } } When using the IDeserializationCallback
type to control deserialization, perform the same checks inside regular constructors as after deserialization with the OnDeserialization method:
[Serializable] public class InternalUrl : IDeserializationCallback { private string url; public InternalUrl(string tmpUrl) { if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation { url= "http://localhost/default"; } else { url= tmpUrl; } } void IDeserializationCallback.OnDeserialization(object sender) // Compliant { if(!url.StartsWith("http://localhost/")) { url= "http://localhost/default"; } else { } } } See
|
||||||||||||
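The rule's core point, re-running constructor validation on the deserialization path, applies beyond .NET. A hedged Python sketch using `pickle`, where `__setstate__` plays the role of the serialization constructor; the class name and localhost URL scheme simply mirror the C# examples and are illustrative only:

```python
import pickle

DEFAULT = "http://localhost/default"

class InternalUrl:
    def __init__(self, url: str):
        self.url = self._validate(url)

    @staticmethod
    def _validate(url: str) -> str:
        # Single shared validation routine, so construction and
        # deserialization cannot drift apart.
        return url if url.startswith("http://localhost/") else DEFAULT

    def __setstate__(self, state: dict) -> None:
        # pickle bypasses __init__; re-apply the same validation here.
        self.url = self._validate(state.get("url", DEFAULT))
```

A payload whose stored state was tampered with still comes out validated, because the deserialization hook funnels through the same check as the constructor.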
csharpsquid:S2257 |
The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has
been protected. Standard algorithms like SHA-256, SHA-384 and SHA-512 should be used instead. This rule tracks custom implementations of the cryptographic base types from the System.Security.Cryptography namespace.
Recommended Secure Coding Practices
Sensitive Code Example: public class CustomHash : HashAlgorithm // Noncompliant { private byte[] result; public override void Initialize() => result = null; protected override byte[] HashFinal() => result; protected override void HashCore(byte[] array, int ibStart, int cbSize) => result ??= array.Take(8).ToArray(); } Compliant Solution: SHA256 mySHA256 = SHA256.Create(); See
|
||||||||||||
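The noncompliant CustomHash above keeps only the first 8 bytes of the input, so any two messages sharing an 8-byte prefix collide. A short Python sketch of the same flaw next to a standard algorithm makes the danger concrete (the message strings are invented for illustration):

```python
import hashlib

def custom_hash(data: bytes) -> bytes:
    # Mirrors the noncompliant C# example: only the first 8 bytes matter.
    return data[:8]

m1 = b"transfer $10 to alice"
m2 = b"transfer $9999999 to mallory"

# The homemade "hash" collides as soon as two messages share a prefix.
assert custom_hash(m1) == custom_hash(m2) == b"transfer"

# A standard algorithm distinguishes them.
assert hashlib.sha256(m1).digest() != hashlib.sha256(m2).digest()
```

Any system using the custom hash for integrity would accept the second message as equivalent to the first, which is why custom constructions should never replace vetted standard algorithms.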
csharpsquid:S4433 |
Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:
A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials. Why is this an issue?When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory. What is the potential impact?An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores. Authentication bypassIf attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider. In such a case, all users configured in the directory might see their identity and privileges taken over. Sensitive information leakIf attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information. Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider. If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law. 
How to fix itCode examplesThe following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism. Noncompliant code exampleDirectoryEntry myDirectoryEntry = new DirectoryEntry(adPath); myDirectoryEntry.AuthenticationType = AuthenticationTypes.None; // Noncompliant DirectoryEntry myDirectoryEntry = new DirectoryEntry(adPath, "u", "p", AuthenticationTypes.None); // Noncompliant Compliant solutionDirectoryEntry myDirectoryEntry = new DirectoryEntry(myADSPath); // Compliant; default DirectoryEntry.AuthenticationType property value is "Secure" since .NET Framework 2.0 DirectoryEntry myDirectoryEntry = new DirectoryEntry(myADSPath, "u", "p", AuthenticationTypes.Secure); ResourcesDocumentation
Standards |
||||||||||||
csharpsquid:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure, because collisions can be computed for them. Ask Yourself Whether: The hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices. Safer alternatives, such as SHA-256, SHA-512 or SHA-3, are recommended; for password hashing, prefer deliberately slow algorithms such as bcrypt, scrypt, argon2 or PBKDF2. Sensitive Code Example: var hashProvider1 = new MD5CryptoServiceProvider(); // Sensitive var hashProvider2 = (HashAlgorithm)CryptoConfig.CreateFromName("MD5"); // Sensitive var hashProvider3 = new SHA1Managed(); // Sensitive var hashProvider4 = HashAlgorithm.Create("SHA1"); // Sensitive Compliant Solution: var hashProvider1 = new SHA512Managed(); // Compliant var hashProvider2 = (HashAlgorithm)CryptoConfig.CreateFromName("SHA512Managed"); // Compliant var hashProvider3 = HashAlgorithm.Create("SHA512Managed"); // Compliant See
|
||||||||||||
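Outside .NET the same migration applies. Python's `hashlib` exposes both the legacy and the recommended algorithms, so the swap is a one-line change (the helper name is an assumption for the sketch):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # The sensitive equivalents would be hashlib.md5 / hashlib.sha1;
    # SHA-512 is one of the safer alternatives named by the rule.
    return hashlib.sha512(data).hexdigest()
```

Note that for password storage specifically, a plain fast hash, even SHA-512, is not enough; a slow, salted scheme such as `hashlib.pbkdf2_hmac` should be used instead.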
csharpsquid:S4792 |
This rule is deprecated, and will eventually be removed. Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities: Logs are useful before, during and after a security incident.
Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how they are logged. This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining on how to use your logs:
Sensitive Code Example.Net Core: configure programmatically using System; using System.Collections; using System.Collections.Generic; using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting; using Microsoft.Extensions.Configuration; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Logging; using Microsoft.Extensions.Options; using Microsoft.AspNetCore; namespace MvcApp { public class ProgramLogging { public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .ConfigureLogging((hostingContext, logging) => // Sensitive { // ... }) .UseStartup<StartupLogging>(); } public class StartupLogging { public void ConfigureServices(IServiceCollection services) { services.AddLogging(logging => // Sensitive { // ... }); } public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { IConfiguration config = null; LogLevel level = LogLevel.Critical; Boolean includeScopes = false; Func<string,Microsoft.Extensions.Logging.LogLevel,bool> filter = null; Microsoft.Extensions.Logging.Console.IConsoleLoggerSettings consoleSettings = null; Microsoft.Extensions.Logging.AzureAppServices.AzureAppServicesDiagnosticsSettings azureSettings = null; Microsoft.Extensions.Logging.EventLog.EventLogSettings eventLogSettings = null; // An issue will be raised for each call to an ILoggerFactory extension methods adding loggers. 
loggerFactory.AddAzureWebAppDiagnostics(); // Sensitive loggerFactory.AddAzureWebAppDiagnostics(azureSettings); // Sensitive loggerFactory.AddConsole(); // Sensitive loggerFactory.AddConsole(level); // Sensitive loggerFactory.AddConsole(level, includeScopes); // Sensitive loggerFactory.AddConsole(filter); // Sensitive loggerFactory.AddConsole(filter, includeScopes); // Sensitive loggerFactory.AddConsole(config); // Sensitive loggerFactory.AddConsole(consoleSettings); // Sensitive loggerFactory.AddDebug(); // Sensitive loggerFactory.AddDebug(level); // Sensitive loggerFactory.AddDebug(filter); // Sensitive loggerFactory.AddEventLog(); // Sensitive loggerFactory.AddEventLog(eventLogSettings); // Sensitive loggerFactory.AddEventLog(level); // Sensitive loggerFactory.AddEventSourceLogger(); // Sensitive IEnumerable<ILoggerProvider> providers = null; LoggerFilterOptions filterOptions1 = null; IOptionsMonitor<LoggerFilterOptions> filterOptions2 = null; LoggerFactory factory = new LoggerFactory(); // Sensitive new LoggerFactory(providers); // Sensitive new LoggerFactory(providers, filterOptions1); // Sensitive new LoggerFactory(providers, filterOptions2); // Sensitive } } } Log4Net using System; using System.IO; using System.Xml; using log4net.Appender; using log4net.Config; using log4net.Repository; namespace Logging { class Log4netLogging { void Foo(ILoggerRepository repository, XmlElement element, FileInfo configFile, Uri configUri, Stream configStream, IAppender appender, params IAppender[] appenders) { log4net.Config.XmlConfigurator.Configure(repository); // Sensitive log4net.Config.XmlConfigurator.Configure(repository, element); // Sensitive log4net.Config.XmlConfigurator.Configure(repository, configFile); // Sensitive log4net.Config.XmlConfigurator.Configure(repository, configUri); // Sensitive log4net.Config.XmlConfigurator.Configure(repository, configStream); // Sensitive log4net.Config.XmlConfigurator.ConfigureAndWatch(repository, configFile); // Sensitive 
log4net.Config.DOMConfigurator.Configure(); // Sensitive log4net.Config.DOMConfigurator.Configure(repository); // Sensitive log4net.Config.DOMConfigurator.Configure(element); // Sensitive log4net.Config.DOMConfigurator.Configure(repository, element); // Sensitive log4net.Config.DOMConfigurator.Configure(configFile); // Sensitive log4net.Config.DOMConfigurator.Configure(repository, configFile); // Sensitive log4net.Config.DOMConfigurator.Configure(configStream); // Sensitive log4net.Config.DOMConfigurator.Configure(repository, configStream); // Sensitive log4net.Config.DOMConfigurator.ConfigureAndWatch(configFile); // Sensitive log4net.Config.DOMConfigurator.ConfigureAndWatch(repository, configFile); // Sensitive log4net.Config.BasicConfigurator.Configure(); // Sensitive log4net.Config.BasicConfigurator.Configure(appender); // Sensitive log4net.Config.BasicConfigurator.Configure(appenders); // Sensitive log4net.Config.BasicConfigurator.Configure(repository); // Sensitive log4net.Config.BasicConfigurator.Configure(repository, appender); // Sensitive log4net.Config.BasicConfigurator.Configure(repository, appenders); // Sensitive } } } NLog: configure programmatically namespace Logging { class NLogLogging { void Foo(NLog.Config.LoggingConfiguration config) { NLog.LogManager.Configuration = config; // Sensitive, this changes the logging configuration. } } } Serilog namespace Logging { class SerilogLogging { void Foo() { new Serilog.LoggerConfiguration(); // Sensitive } } } See
|
||||||||||||
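One recommendation above, keeping sensitive values out of log output, can be enforced centrally rather than at every call site. A hedged sketch with Python's standard `logging` module; the regex and filter name are assumptions for illustration, not a complete redaction solution:

```python
import logging
import re

# Crude credential pattern; real deployments need a broader list.
SECRET_PATTERN = re.compile(r"(password|token)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    """Masks obvious credential patterns before a record is emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_PATTERN.sub(r"\1=***", str(record.msg))
        return True  # keep the record, just redacted
```

Attaching the filter once, via `logger.addFilter(RedactingFilter())`, redacts every handler fed by that logger, which is easier to audit than scattered ad-hoc masking.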
csharpsquid:S2755 |
This vulnerability allows the usage of external entities in XML. Why is this an issue?External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack. What is the potential impact?Exposing sensitive dataOne significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information. Exhausting system resourcesAnother consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience. Forging requestsXXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure. 
How to fix it in .NETCode examplesThe following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed. Noncompliant code exampleusing System.Xml; public static void Decode() { XmlDocument parser = new XmlDocument(); parser.XmlResolver = new XmlUrlResolver(); // Noncompliant parser.LoadXml("xxe.xml"); } Compliant solution
using System.Xml; public static void Decode() { XmlDocument parser = new XmlDocument(); parser.XmlResolver = null; parser.LoadXml("xxe.xml"); } How does this work?Disable external entitiesThe most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework. If external entity processing is necessary for certain scenarios, adopt an allow-list approach to restrict the entities that can be resolved
during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are
processed. ResourcesStandards |
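The allow-list approach described above can be sketched with .NET's `XmlPreloadedResolver` (from `System.Xml.Resolvers`), which resolves only entities registered ahead of time; the DTD URI and content below are illustrative:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Resolvers;

public static class SafeXmlParsing
{
    // Parses XML while allowing only pre-registered ("trusted") external entities.
    public static XmlDocument Parse(string xml)
    {
        var resolver = new XmlPreloadedResolver();
        // Hypothetical trusted DTD; any URI not added here fails to resolve.
        resolver.Add(new Uri("urn:trusted:root.dtd"), "<!ELEMENT root ANY>");

        var settings = new XmlReaderSettings
        {
            DtdProcessing = DtdProcessing.Parse, // only because entities are needed here
            XmlResolver = resolver               // unknown entities throw instead of being fetched
        };

        var doc = new XmlDocument { XmlResolver = null }; // no resolution at document level
        using var reader = XmlReader.Create(new StringReader(xml), settings);
        doc.Load(reader);
        return doc;
    }
}
```

With a plain payload such as `<root/>` the parser behaves normally, while a payload referencing an unregistered external entity fails instead of reaching out to the filesystem or network.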
||||||||||||
csharpsquid:S2612 |
In Unix, the "others" class refers to all users except the owner of the file and the members of the group assigned to this file. In Windows, the "Everyone" group is similar and includes all members of the Authenticated Users group, as well as the built-in Guest account and several other built-in security accounts. Granting permissions to these groups can lead to unintended access to files. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesThe most restrictive possible permissions should be assigned to files and directories. Sensitive Code Example.Net Framework: var unsafeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow); var fileSecurity = File.GetAccessControl("path"); fileSecurity.AddAccessRule(unsafeAccessRule); // Sensitive fileSecurity.SetAccessRule(unsafeAccessRule); // Sensitive File.SetAccessControl("fileName", fileSecurity); .Net / .Net Core var fileInfo = new FileInfo("path"); var fileSecurity = fileInfo.GetAccessControl(); fileSecurity.AddAccessRule(new FileSystemAccessRule("Everyone", FileSystemRights.Write, AccessControlType.Allow)); // Sensitive fileInfo.SetAccessControl(fileSecurity); .Net / .Net Core using Mono.Posix.NETStandard var fileSystemEntry = UnixFileSystemInfo.GetFileSystemEntry("path"); fileSystemEntry.FileAccessPermissions = FileAccessPermissions.OtherReadWriteExecute; // Sensitive Compliant Solution.Net Framework var safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny); var fileSecurity = File.GetAccessControl("path"); fileSecurity.AddAccessRule(safeAccessRule); File.SetAccessControl("path", fileSecurity); .Net / .Net Core var safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny); var fileInfo = new FileInfo("path"); var fileSecurity = fileInfo.GetAccessControl(); fileSecurity.SetAccessRule(safeAccessRule); fileInfo.SetAccessControl(fileSecurity); .Net / .Net Core using Mono.Posix.NETStandard var fs = UnixFileSystemInfo.GetFileSystemEntry("path"); fs.FileAccessPermissions = FileAccessPermissions.UserExecute; See
|
||||||||||||
csharpsquid:S3884 |
This rule is deprecated, and will eventually be removed. Why is this an issue?
Specifically, these methods are meant to be called from non-managed code such as a C++ wrapper that then invokes the managed, i.e. C# or VB.NET, code. Noncompliant code example[DllImport("ole32.dll")] static extern int CoSetProxyBlanket([MarshalAs(UnmanagedType.IUnknown)]object pProxy, uint dwAuthnSvc, uint dwAuthzSvc, [MarshalAs(UnmanagedType.LPWStr)] string pServerPrincName, uint dwAuthnLevel, uint dwImpLevel, IntPtr pAuthInfo, uint dwCapabilities); public enum RpcAuthnLevel { Default = 0, None = 1, Connect = 2, Call = 3, Pkt = 4, PktIntegrity = 5, PktPrivacy = 6 } public enum RpcImpLevel { Default = 0, Anonymous = 1, Identify = 2, Impersonate = 3, Delegate = 4 } public enum EoAuthnCap { None = 0x00, MutualAuth = 0x01, StaticCloaking = 0x20, DynamicCloaking = 0x40, AnyAuthority = 0x80, MakeFullSIC = 0x100, Default = 0x800, SecureRefs = 0x02, AccessControl = 0x04, AppID = 0x08, Dynamic = 0x10, RequireFullSIC = 0x200, AutoImpersonate = 0x400, NoCustomMarshal = 0x2000, DisableAAA = 0x1000 } [DllImport("ole32.dll")] public static extern int CoInitializeSecurity(IntPtr pVoid, int cAuthSvc, IntPtr asAuthSvc, IntPtr pReserved1, RpcAuthnLevel level, RpcImpLevel impers, IntPtr pAuthList, EoAuthnCap dwCapabilities, IntPtr pReserved3); static void Main(string[] args) { var hres1 = CoSetProxyBlanket(null, 0, 0, null, 0, 0, IntPtr.Zero, 0); // Noncompliant var hres2 = CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero, RpcAuthnLevel.None, RpcImpLevel.Impersonate, IntPtr.Zero, EoAuthnCap.None, IntPtr.Zero); // Noncompliant } Resources |
||||||||||||
csharpsquid:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it affects application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, fixing the issue takes longer, which increases an attack’s impact. Ask Yourself WhetherThe disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesDon’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Examplevar ip = "192.168.12.42"; var address = IPAddress.Parse(ip); Compliant Solutionvar ip = ConfigurationManager.AppSettings["myapplication.ip"]; var address = IPAddress.Parse(ip); ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See |
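As a sketch of the environment-variable approach suggested above (the variable name `MYAPP_SERVICE_IP` is hypothetical), the address can be resolved at startup instead of being compiled in:

```csharp
using System;
using System.Net;

public static class EndpointConfig
{
    // Reads the service address from the environment instead of hard-coding it,
    // failing fast when the deployment forgot to configure it.
    public static IPAddress ResolveServiceAddress()
    {
        string ip = Environment.GetEnvironmentVariable("MYAPP_SERVICE_IP");
        if (string.IsNullOrEmpty(ip))
            throw new InvalidOperationException("MYAPP_SERVICE_IP is not configured.");
        return IPAddress.Parse(ip);
    }
}
```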
||||||||||||
csharpsquid:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrityWhen TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. 
How to fix it in .NETCode examplesIn the following example, the callback change impacts the entirety of HTTP requests made by the application. The certificate validation gets disabled by overriding `ServicePointManager.ServerCertificateValidationCallback` with a callback that unconditionally returns true. Noncompliant code exampleusing System.Net; using System.Net.Http; public static void connect() { ServicePointManager.ServerCertificateValidationCallback += (sender, certificate, chain, errors) => { return true; // Noncompliant }; HttpClient httpClient = new HttpClient(); HttpResponseMessage response = httpClient.GetAsync("https://example.com").Result; } How does this work?Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificatesIf possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAsIn some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. ResourcesStandards
|
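The rule text above stops short of a compliant example. A minimal sketch, assuming .NET's `HttpClientHandler`: leave the default chain validation in place, and if a single self-signed certificate must be accepted, compare its hash explicitly rather than returning true unconditionally (the thumbprint below is a placeholder):

```csharp
using System.Net.Http;
using System.Net.Security;

public static class SecureHttp
{
    public static HttpClient Create()
    {
        var handler = new HttpClientHandler();
        // Keep the platform's default chain validation; only accept one known
        // self-signed certificate as an additional, explicit exception.
        handler.ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
        {
            if (errors == SslPolicyErrors.None)
                return true; // normal, fully validated certificate

            // Placeholder thumbprint of the single pinned self-signed certificate.
            return cert is not null
                && cert.GetCertHashString() == "0123456789ABCDEF0123456789ABCDEF01234567";
        };
        return new HttpClient(handler);
    }
}
```

Unlike the noncompliant callback, this never silently trusts an arbitrary certificate: anything that fails default validation must match the pinned hash exactly.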
||||||||||||
csharpsquid:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers will compress irrelevant data (e.g., a long string of repeated bytes). Ask Yourself WhetherArchives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleusing var zipToOpen = new FileStream(@"ZipBomb.zip", FileMode.Open); using var archive = new ZipArchive(zipToOpen, ZipArchiveMode.Read); foreach (ZipArchiveEntry entry in archive.Entries) { entry.ExtractToFile("./output_onlyfortesting.txt", true); // Sensitive } Compliant Solutionint THRESHOLD_ENTRIES = 10000; int THRESHOLD_SIZE = 1000000000; // 1 GB double THRESHOLD_RATIO = 10; int totalSizeArchive = 0; int totalEntryArchive = 0; using var zipToOpen = new FileStream(@"ZipBomb.zip", FileMode.Open); using var archive = new ZipArchive(zipToOpen, ZipArchiveMode.Read); foreach (ZipArchiveEntry entry in archive.Entries) { totalEntryArchive++; using (Stream st = entry.Open()) { byte[] buffer = new byte[1024]; int totalSizeEntry = 0; int numBytesRead = 0; do { numBytesRead = st.Read(buffer, 0, 1024); totalSizeEntry += numBytesRead; totalSizeArchive += numBytesRead; double compressionRatio = (double)totalSizeEntry / entry.CompressedLength; // cast avoids integer division if(compressionRatio > THRESHOLD_RATIO) { // ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack break; } } while (numBytesRead > 0); } if(totalSizeArchive > THRESHOLD_SIZE) { // the uncompressed data size is too much for the application resource capacity break; } if(totalEntryArchive > THRESHOLD_ENTRIES) { // too many entries in this archive, which can lead to inode exhaustion on the system break; } } See
|
||||||||||||
csharpsquid:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplestring username = "admin"; string password = "Admin123"; // Sensitive string usernamePassword = "user=admin&password=Admin123"; // Sensitive string url = "scheme://user:Admin123@domain.com"; // Sensitive Compliant Solutionstring username = "admin"; string password = GetEncryptedPassword(); string usernamePassword = string.Format("user={0}&password={1}", GetEncryptedUsername(), GetEncryptedPassword()); string url = $"scheme://{username}:{password}@domain.com"; string url2 = "http://guest:guest@domain.com"; // Compliant const string Password_Property = "custom.password"; // Compliant Exceptions
See
|
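The `GetEncryptedPassword()` helper in the compliant solution is left undefined by the rule description. One possible sketch, fetching secrets from the environment as the recommendations suggest (the variable name `DB_PASSWORD` is illustrative; a dedicated secrets manager would work the same way):

```csharp
using System;

public static class Secrets
{
    // Illustrative: fetch a secret from the environment (or a secrets manager)
    // instead of embedding it in source code, and fail loudly when it is absent.
    public static string GetRequired(string name)
    {
        string value = Environment.GetEnvironmentVariable(name);
        if (string.IsNullOrEmpty(value))
            throw new InvalidOperationException($"Secret '{name}' is not configured.");
        return value;
    }
}
```

A connection string would then be assembled at runtime, e.g. `string password = Secrets.GetRequired("DB_PASSWORD");`, keeping the credential out of the compiled binary and the repository.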
||||||||||||
csharpsquid:S5332 |
Clear-text protocols such as `ftp`, `telnet`, or `http` lack encryption of transported data, as well as the capability to build an authenticated connection.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the `http` protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code Examplevar urlHttp = "http://example.com"; // Noncompliant var urlFtp = "ftp://anonymous@example.com"; // Noncompliant var urlTelnet = "telnet://anonymous@example.com"; // Noncompliant using var smtp = new SmtpClient("host", 25); // Noncompliant, EnableSsl is not set using var telnet = new MyTelnet.Client("host", port); // Noncompliant, rule raises Security Hotspot on any member containing "Telnet" Compliant Solutionvar urlHttps = "https://example.com"; var urlSftp = "sftp://anonymous@example.com"; var urlSsh = "ssh://anonymous@example.com"; using var smtp = new SmtpClient("host", 25) { EnableSsl = true }; using var ssh = new MySsh.Client("host", port); ExceptionsNo issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
csharpsquid:S5693 |
Rejecting requests with significant content length is a good practice to control the network traffic intensity and thus resource consumption in order to prevent DoS attacks. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to customize the rule with the limit values that correspond to the web application. Sensitive Code Exampleusing Microsoft.AspNetCore.Mvc; public class MyController : Controller { [HttpPost] [DisableRequestSizeLimit] // Sensitive: No size limit [RequestSizeLimit(10485760)] // Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB public IActionResult PostRequest(Model model) { // ... } [HttpPost] [RequestFormLimits(MultipartBodyLengthLimit = 10485760)] // Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB public IActionResult MultipartFormRequest(Model model) { // ... } } In Web.config: <configuration> <system.web> <httpRuntime maxRequestLength="81920" executionTimeout="3600" /> <!-- Sensitive: maxRequestLength is expressed in KB, so 81920 KB = 80 MB --> </system.web> <system.webServer> <security> <requestFiltering> <requestLimits maxAllowedContentLength="83886080" /> <!-- Sensitive: maxAllowedContentLength is expressed in bytes, so 83886080 B = 81920 KB = 80 MB --> </requestFiltering> </security> </system.webServer> </configuration> Compliant Solutionusing Microsoft.AspNetCore.Mvc; public class MyController : Controller { [HttpPost] [RequestSizeLimit(8388608)] // Compliant: 8388608 B = 8192 KB = 8 MB public IActionResult PostRequest(Model model) { // ... } [HttpPost] [RequestFormLimits(MultipartBodyLengthLimit = 8388608)] // Compliant: 8388608 B = 8192 KB = 8 MB public IActionResult MultipartFormRequest(Model model) { // ... } } In Web.config: <configuration> <system.web> <httpRuntime maxRequestLength="8192" executionTimeout="3600" /> <!-- Compliant: maxRequestLength is expressed in KB, so 8192 KB = 8 MB --> </system.web> <system.webServer> <security> <requestFiltering> <requestLimits maxAllowedContentLength="8388608" /> <!-- Compliant: maxAllowedContentLength is expressed in bytes, so 8388608 B = 8192 KB = 8 MB --> </requestFiltering> </security> </system.webServer> </configuration> See
|
||||||||||||
csharpsquid:S2077 |
Formatted SQL queries can be difficult to maintain and debug, and can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplepublic void Foo(DbContext context, string query, string param) { string sensitiveQuery = string.Concat(query, param); context.Database.ExecuteSqlCommand(sensitiveQuery); // Sensitive context.Query<User>().FromSql(sensitiveQuery); // Sensitive context.Database.ExecuteSqlCommand($"SELECT * FROM mytable WHERE mycol={param}", param); // Sensitive, the FormattableString is evaluated and converted to RawSqlString string interpolatedQuery = $"SELECT * FROM mytable WHERE mycol={param}"; context.Database.ExecuteSqlCommand(interpolatedQuery); // Sensitive, the FormattableString has already been evaluated, it won't be converted to a parametrized query. } public void Bar(SqlConnection connection, string param) { SqlCommand command; string sensitiveQuery = string.Format("INSERT INTO Users (name) VALUES (\"{0}\")", param); command = new SqlCommand(sensitiveQuery); // Sensitive command.CommandText = sensitiveQuery; // Sensitive SqlDataAdapter adapter; adapter = new SqlDataAdapter(sensitiveQuery, connection); // Sensitive } Compliant Solutionpublic void Foo(DbContext context, string query, string param) { context.Database.ExecuteSqlCommand("SELECT * FROM mytable WHERE mycol=@p0", param); // Compliant, it's a parametrized safe query } See
|
||||||||||||
csharpsquid:S6640 |
Using `unsafe` code blocks is security-sensitive.
Ask Yourself Whether
There is a risk if you answered yes to the question. Recommended Secure Coding PracticesUnless absolutely necessary, do not use `unsafe` code blocks. If it is not possible to remove the code block, then it should be kept as short as possible. Doing so reduces risk, as there is less code that can potentially introduce new bugs. Within the `unsafe` block, pay particular attention to pointer arithmetic and buffer bounds.
Sensitive Code Examplepublic unsafe int SubarraySum(int[] array, int start, int end) // Sensitive { var sum = 0; // Skip array bound checks for extra performance fixed (int* firstNumber = array) { for (int i = start; i < end; i++) sum += *(firstNumber + i); } return sum; } Compliant Solutionpublic int SubarraySum(int[] array, int start, int end) { var sum = 0; Span<int> span = array.AsSpan(); for (int i = start; i < end; i++) sum += span[i]; return sum; } See
|
||||||||||||
csharpsquid:S2053 |
This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes. Why is this an issue?During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords. However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital. What is the potential impact?Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need. Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster. If multiple users have the same password and the same salt, their password hashes would be identical. 
This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once. A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before. With short salts, the probability of a collision between two users' password-and-salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred. ExceptionsTo securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:
When they are used for password storage, using a secure, random salt is required. However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted. How to fix it in .NETCode examplesThe following code contains examples of hard-coded salts. Noncompliant code exampleusing System.Security.Cryptography; using System.Text; public static void hash(string password) { var salt = Encoding.UTF8.GetBytes("salty"); var hashed = new Rfc2898DeriveBytes(password, salt); // Noncompliant } Compliant solutionusing System.Security.Cryptography; public static void hash(string password) { var hashed = new Rfc2898DeriveBytes(password, 32); // randomly generated 32-byte salt } How does this work?This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 32 bytes (256 bits), as recommended by industry standards. In the case of the code sample, the class automatically takes care of generating a secure salt if none is specified. ResourcesStandards |
||||||||||||
csharpsquid:S5443 |
Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like `/tmp` on Linux distributions.
In the past, it has led to the following vulnerabilities: This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like `/tmp`, or the use of environment variables such as `TMP` that point to such directories.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesOut of the box, .NET is missing secure-by-design APIs to create temporary files. To overcome this, one of the following options can be used:
Sensitive Code Exampleusing var writer = new StreamWriter("/tmp/f"); // Sensitive var tmp = Environment.GetEnvironmentVariable("TMP"); // Sensitive Compliant Solutionvar randomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName()); // Creates a new file with write, non inheritable permissions which is deleted on close. using var fileStream = new FileStream(randomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose); using var writer = new StreamWriter(fileStream); See
|
||||||||||||
csharpsquid:S5445 |
Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic. Why is this an issue?Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it. In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues. What is the potential impact?Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it. Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise. Information disclosureBecause attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive. For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements. 
Attack surface extensionAn application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise. For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over. How to fix itCode examplesThe following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function. Noncompliant code exampleusing System.IO; public void Example() { var tempPath = Path.GetTempFileName(); // Noncompliant using (var writer = new StreamWriter(tempPath)) { writer.WriteLine("content"); } } Compliant solutionusing System.IO; public void Example() { var randomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName()); using (var fileStream = new FileStream(randomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)) using (var writer = new StreamWriter(fileStream)) { writer.WriteLine("content"); } } How does this work?Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks. Strong security controlsTemporary files can be created using unsafe functions and API as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose. 
In general, applications should ensure that attackers cannot create a file before them. This turns into the following requirements when creating the files:
Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them. Here the example compliant code uses the `FileOptions.DeleteOnClose` flag for this purpose. ResourcesDocumentation
Standards |
||||||||||||
csharpsquid:S6444 |
Not specifying a timeout for regular expressions can lead to a Denial-of-Service attack. Pass a timeout when using `System.Text.RegularExpressions` to process untrusted input, because a malicious user could craft a value for which the evaluation lasts an excessive amount of time.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplepublic void RegexPattern(string input) { var emailPattern = new Regex(".+@.+", RegexOptions.None); var isNumber = Regex.IsMatch(input, "[0-9]+"); var isLetterA = Regex.IsMatch(input, "(a+)+"); } Compliant Solutionpublic void RegexPattern(string input) { var emailPattern = new Regex(".+@.+", RegexOptions.None, TimeSpan.FromMilliseconds(100)); var isNumber = Regex.IsMatch(input, "[0-9]+", RegexOptions.None, TimeSpan.FromMilliseconds(100)); var isLetterA = Regex.IsMatch(input, "(a+)+", RegexOptions.NonBacktracking); // .Net 7 and above AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(100)); // process-wide setting } See
|
||||||||||||
csharpsquid:S4036 |
When executing an OS command, unless you specify the full path to the executable, the locations listed in your application’s `PATH` environment variable will be searched for the executable. If any of those locations is writable by an attacker, a malicious binary with the same name could be executed instead. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesFully qualified/absolute path should be used to specify the OS command to execute. Sensitive Code ExampleProcess p = new Process(); p.StartInfo.FileName = "binary"; // Sensitive Compliant SolutionProcess p = new Process(); p.StartInfo.FileName = @"C:\Apps\binary.exe"; // Compliant See |
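Building on the compliant example above, a sketch (the `C:\Apps` directory is hypothetical) that always resolves the executable from an application-controlled location and never relies on PATH lookup or shell resolution:

```csharp
using System.Diagnostics;
using System.IO;

public static class TrustedProcess
{
    // Hypothetical trusted install directory; adjust to the application's layout.
    private static readonly string ToolsDir = @"C:\Apps";

    public static ProcessStartInfo Build(string exeName, params string[] args)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = Path.Combine(ToolsDir, exeName), // absolute path, no PATH search
            UseShellExecute = false                     // avoid shell-based resolution
        };
        foreach (var a in args)
            startInfo.ArgumentList.Add(a);              // arguments passed without shell quoting
        return startInfo;
    }
}
```

The returned `ProcessStartInfo` can then be passed to `Process.Start`; because the path is absolute and the shell is bypassed, a same-named binary planted in a PATH directory is never picked up.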
||||||||||||
csharpsquid:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities: The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same-origin policy. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleASP.NET Core MVC: [HttpGet] public string Get() { Response.Headers.Add("Access-Control-Allow-Origin", "*"); // Sensitive Response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive } public void ConfigureServices(IServiceCollection services) { services.AddCors(options => { options.AddDefaultPolicy(builder => { builder.WithOrigins("*"); // Sensitive }); options.AddPolicy(name: "EnableAllPolicy", builder => { builder.WithOrigins("*"); // Sensitive }); options.AddPolicy(name: "OtherPolicy", builder => { builder.AllowAnyOrigin(); // Sensitive }); }); services.AddControllers(); } ASP.NET MVC: public class HomeController : ApiController { public HttpResponseMessage Get() { var response = HttpContext.Current.Response; response.Headers.Add("Access-Control-Allow-Origin", "*"); // Sensitive response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive response.AppendHeader(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive } } [EnableCors(origins: "*", headers: "*", methods: "GET")] // Sensitive public HttpResponseMessage Get() => new HttpResponseMessage() { Content = new StringContent("content") }; User-controlled origin: String origin = Request.Headers["Origin"]; Response.Headers.Add("Access-Control-Allow-Origin", origin); // Sensitive Compliant SolutionASP.NET Core MVC: [HttpGet] public string Get() { Response.Headers.Add("Access-Control-Allow-Origin", "https://trustedwebsite.com"); // Safe Response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com"); // Safe } public void ConfigureServices(IServiceCollection services) { services.AddCors(options => { options.AddDefaultPolicy(builder => { builder.WithOrigins("https://trustedwebsite.com", "https://anothertrustedwebsite.com"); // Safe }); options.AddPolicy(name: "EnableAllPolicy", builder => { builder.WithOrigins("https://trustedwebsite.com"); // Safe }); }); services.AddControllers(); } ASP.Net MVC: public class HomeController : 
ApiController { public HttpResponseMessage Get() { var response = HttpContext.Current.Response; response.Headers.Add("Access-Control-Allow-Origin", "https://trustedwebsite.com"); response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com"); response.AppendHeader(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com"); } } [EnableCors(origins: "https://trustedwebsite.com", headers: "*", methods: "GET")] public HttpResponseMessage Get() => new HttpResponseMessage() { Content = new StringContent("content") }; User-controlled origin validated with an allow-list: String origin = Request.Headers["Origin"]; if (trustedOrigins.Contains(origin)) { Response.Headers.Add("Access-Control-Allow-Origin", origin); } See
|
||||||||||||
csharpsquid:S2092 |
When a cookie is protected with the `secure` attribute set to *true* it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: When the `Secure` property is set to `false`, the cookie can be sent over unencrypted HTTP requests: HttpCookie myCookie = new HttpCookie("Sensitive cookie"); myCookie.Secure = false; // Sensitive: a security-sensitive cookie is created with the secure flag set to false The default value of the `Secure` property is `false`:
HttpCookie myCookie = new HttpCookie("Sensitive cookie"); // Sensitive: a security-sensitive cookie is created with the secure flag not defined (by default set to false) Compliant Solution: Set the `Secure` property to `true`: HttpCookie myCookie = new HttpCookie("Sensitive cookie"); myCookie.Secure = true; // Compliant Or change the default flag values for the whole application by editing the Web.config configuration file: <httpCookies httpOnlyCookies="true" requireSSL="true" />
See
|
||||||||||||
xml:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example: Spring Social Twitter secrets can be stored inside an XML file: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <bean id="connectionFactoryLocator" class="org.springframework.social.connect.support.ConnectionFactoryRegistry"> <property name="connectionFactories"> <list> <bean class="org.springframework.social.twitter.connect.TwitterConnectionFactory"> <constructor-arg value="username" /> <constructor-arg value="very-secret-password" /> <!-- Sensitive --> </bean> </list> </property> </bean> </beans> Compliant Solution: In Spring Social Twitter, retrieve secrets from environment variables: @Configuration public class SocialConfig implements SocialConfigurer { @Override public void addConnectionFactories(ConnectionFactoryConfigurer cfConfig, Environment env) { cfConfig.addConnectionFactory(new TwitterConnectionFactory( env.getProperty("twitter.consumerKey"), env.getProperty("twitter.consumerSecret"))); // Compliant } } See
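Outside of Spring, the same principle can be sketched in plain Java (the `Credentials` helper and the system-property fallback below are illustrative assumptions, not part of the rule): resolve secrets from the environment at startup and fail fast when they are missing.

```java
public class Credentials {
    // Resolve a secret from the environment, falling back to a JVM system
    // property (handy for local runs); never from a hard-coded literal.
    static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            value = System.getProperty(name);
        }
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing credential: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        // Demo only: a real deployment would set this in the process environment.
        System.setProperty("twitter.consumerSecret", "demo-secret");
        System.out.println(require("twitter.consumerSecret"));
    }
}
```

Failing fast on a missing credential surfaces misconfiguration at startup instead of at the first authenticated call.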
|
||||||||||||
xml:S3355 |
Why is this an issue? Every filter defined in `web.xml` should be used in a `<filter-mapping>` element; otherwise such filters are never invoked. Noncompliant code example<filter> <filter-name>DefinedNotUsed</filter-name> <filter-class>com.myco.servlet.ValidationFilter</filter-class> </filter> Compliant solution<filter> <filter-name>ValidationFilter</filter-name> <filter-class>com.myco.servlet.ValidationFilter</filter-class> </filter> <filter-mapping> <filter-name>ValidationFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> Resources
|
||||||||||||
xml:S5332 |
Clear-text protocols such as `ftp`, `telnet`, or `http` lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the `http` protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code Example<application android:usesCleartextTraffic="true"> <!-- Sensitive --> </application> For versions older than Android 9 (API level 28) <application> <!-- Sensitive --> </application> Compliant Solution<application android:usesCleartextTraffic="false"> </application> See
|
||||||||||||
xml:S2647 |
Why is this an issue?Basic authentication’s only means of obfuscation is Base64 encoding. Since Base64 encoding is easily recognized and reversed, it offers only the thinnest veil of protection to your users, and should not be used. Noncompliant code example// in web.xml <web-app ...> <!-- ... --> <login-config> <auth-method>BASIC</auth-method> </login-config> </web-app> ExceptionsThe rule will not raise any issue if HTTPS is enabled, on any URL-pattern. <web-app ...> <!-- ... --> <security-constraint> <web-resource-collection> <web-resource-name>HTTPS enabled</web-resource-name> <url-pattern>/*</url-pattern> </web-resource-collection> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> </web-app> Resources
|
||||||||||||
xml:S3330 |
When a cookie is configured with the `HttpOnly` attribute set to *true*, the browser guarantees that no client-side script will be able to read it. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example<session-config> <cookie-config> <http-only>false</http-only> <!-- Sensitive --> </cookie-config> </session-config> <session-config> <cookie-config> <!-- Sensitive: http-only tag is missing defaulting to false --> </cookie-config> </session-config> Compliant Solution<session-config> <cookie-config> <http-only>true</http-only> <!-- Compliant --> </cookie-config> </session-config> See
|
||||||||||||
xml:S3374 |
Why is this an issue? According to the Common Weakness Enumeration, if two validation forms have the same name, the Struts Validator arbitrarily chooses one of the forms to use for input validation and discards the other.
In such a case, it is likely that the two forms should be combined. At the very least, one should be removed. Noncompliant code example<form-validation> <formset> <form name="BookForm"> ... </form> <form name="BookForm"> ... </form> <!-- Noncompliant --> </formset> </form-validation> Compliant solution<form-validation> <formset> <form name="BookForm"> ... </form> </formset> </form-validation> Resources
|
||||||||||||
xml:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Activating a development feature in production can have a wide range of consequences, depending on its use:
In all cases, the attack surface of an affected application is increased. In some cases, such features can also make the exploitation of other unrelated vulnerabilities easier. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Applications should be released without any development feature activated. When such features are required during the development process of the application, they should only apply to a build variant that is dedicated to development environments. That variant should not be set as the default build configuration, to prevent any unintended exposure of development features. Sensitive Code Example: In `AndroidManifest.xml`: <application android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:debuggable="true" android:theme="@style/AppTheme"> </application> <!-- Sensitive --> In a `Web.config` file: <configuration> <system.web> <customErrors mode="Off" /> <!-- Sensitive --> </system.web> </configuration> Compliant Solution: In `AndroidManifest.xml`: <application android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:debuggable="false" android:theme="@style/AppTheme"> </application> <!-- Compliant --> In a `Web.config` file: <configuration> <system.web> <customErrors mode="On" /> <!-- Compliant --> </system.web> </configuration> See
|
||||||||||||
xml:S5322 |
Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities: Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application once a matching broadcast is received, even if it is not already running. The receiver is an entry point into the application. Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver. Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters that were granted this permission can send a message to the receiver. This rule raises an issue when a receiver is registered without specifying any broadcast permission. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesRestrict the access to broadcasted intents. See the Android documentation for more information. Sensitive Code Example<receiver android:name=".MyBroadcastReceiver" android:exported="true"> <!-- Sensitive --> <intent-filter> <action android:name="android.intent.action.AIRPLANE_MODE"/> </intent-filter> </receiver> Compliant SolutionEnforce permissions: <receiver android:name=".MyBroadcastReceiver" android:permission="android.permission.SEND_SMS" android:exported="true"> <intent-filter> <action android:name="android.intent.action.AIRPLANE_MODE"/> </intent-filter> </receiver> Do not export the receiver and only receive system intents: <receiver android:name=".MyBroadcastReceiver" android:exported="false"> <intent-filter> <action android:name="android.intent.action.AIRPLANE_MODE"/> </intent-filter> </receiver> See
|
||||||||||||
xml:S5594 |
Why is this an issue?Once an Android component has been exported, it can be used by attackers to launch malicious actions and might also give access to other components that are not exported. As a result, sensitive user data can be stolen, and components can be launched unexpectedly. For this reason, the following components should be protected:
To do so, it is recommended to either set `android:exported` to `false` or to define permissions that other applications must hold in order to access the component. Warning: When targeting Android versions lower than 12, the presence of intent filters will cause `android:exported` to default to `true`. If a component must be exported, use a permission to restrict which applications can interact with it. Noncompliant code example: The following components are vulnerable because permissions are undefined or partially defined: <provider android:authorities="com.example.app.Provider" android:name="com.example.app.Provider" android:exported="true" android:readPermission="com.example.app.READ_PERMISSION" /> <!-- Noncompliant: write permission is not defined --> <provider android:authorities="com.example.app.Provider" android:name="com.example.app.Provider" android:exported="true" android:writePermission="com.example.app.WRITE_PERMISSION" /> <!-- Noncompliant: read permission is not defined --> <activity android:name="com.example.activity.Activity"> <!-- Noncompliant: permissions are not defined --> <intent-filter> <action android:name="com.example.OPEN_UI"/> <category android:name="android.intent.category.DEFAULT"/> </intent-filter> </activity> Compliant solution: If the component’s capabilities or data are not intended to be shared with other apps, its `android:exported` attribute should be set to `false`: <provider android:authorities="com.example.app.Provider" android:name="com.example.app.Provider" android:exported="false" /> Otherwise, implement permissions: <provider android:authorities="com.example.app.Provider" android:name="com.example.app.Provider" android:exported="true" android:readPermission="com.example.app.READ_PERMISSION" android:writePermission="com.example.app.WRITE_PERMISSION" /> <activity android:name="com.example.activity.Activity" android:permission="com.example.app.PERMISSION" > <intent-filter> <action android:name="com.example.OPEN_UI"/> <category android:name="android.intent.category.DEFAULT" /> </intent-filter> </activity> Resources
|
||||||||||||
xml:S5604 |
Permissions that can have a large impact on user privacy, marked as dangerous or "not for use by third-party applications" by Android, should be requested only if they are really necessary to implement critical features of an application. Ask Yourself Whether
You are at risk if you answered yes to any of those questions. Recommended Secure Coding Practices: It is recommended to carefully review all the permissions and to use dangerous ones only when they are really necessary to implement critical features of the application. Sensitive Code Example: In AndroidManifest.xml: <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <!-- Sensitive --> <uses-permission android:name="android.permission.ACCESS_MEDIA_LOCATION" /> <!-- Sensitive --> Compliant Solution<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Compliant --> See
|
||||||||||||
xml:S6358 |
Android has a built-in backup mechanism that can save and restore application data. When application backup is enabled, local data from your application can be exported to Google Cloud or to an external device via `adb backup`. By default application backup is enabled and it includes:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example<application android:allowBackup="true"> <!-- Sensitive --> </application> Compliant SolutionDisable application backup. <application android:allowBackup="false"> </application> If targeting Android 6.0 or above (API level 23), define files to include/exclude from the application backup. <application android:allowBackup="true" android:fullBackupContent="@xml/backup.xml"> </application> See
|
||||||||||||
xml:S6359 |
Why is this an issue? Defining a custom permission in the `android.permission` namespace can collide with permissions defined by the system or by other applications, leading to unexpected permission grants; custom permissions should be defined in the application’s own namespace. Noncompliant code example<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.organization.app"> <permission android:name="android.permission.MYPERMISSION" /> <!-- Noncompliant --> </manifest> Compliant solution<?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.organization.app"> <permission android:name="com.organization.app.permission.MYPERMISSION" /> </manifest> Resources
|
||||||||||||
xml:S6361 |
Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding Practices
Sensitive Code Example<provider android:authorities="com.example.app.Provider" android:name="com.example.app.Provider" android:permission="com.example.app.PERMISSION" <!-- Sensitive --> android:exported="true"/> <provider android:authorities="com.example.app.Provider" android:name="com.example.app.Provider" android:readPermission="com.example.app.PERMISSION" <!-- Sensitive --> android:writePermission="com.example.app.PERMISSION" <!-- Sensitive --> android:exported="true"/> Compliant Solution<provider android:authorities="com.example.app.MyProvider" android:name="com.example.app.MyProvider" android:readPermission="com.example.app.READ_PERMISSION" android:writePermission="com.example.app.WRITE_PERMISSION" android:exported="true"/> See
|
||||||||||||
xml:S3281 |
Why is this an issue? Default interceptors, such as application security interceptors, must be listed in the `ejb-jar.xml` file, or they will not be treated as default. This rule applies to projects that contain JEE Beans (any one of `javax.ejb.Singleton`, `MessageDriven`, `Stateless` or `Stateful`). Noncompliant code example// file: ejb-interceptors.xml <assembly-descriptor> <interceptor-binding> <!-- should be declared in ejb-jar.xml --> <ejb-name>*</ejb-name> <interceptor-class>com.myco.ImportantInterceptor</interceptor-class> <!-- Noncompliant; will NOT be treated as default --> </interceptor-binding> </assembly-descriptor> Compliant solution// file: ejb-jar.xml <assembly-descriptor> <interceptor-binding> <ejb-name>*</ejb-name> <interceptor-class>com.myco.ImportantInterceptor</interceptor-class> </interceptor-binding> </assembly-descriptor> Resources
|
||||||||||||
xml:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities: The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers to the response, called CORS headers, that act as directives for the browser and change the access control policy, relaxing the same-origin policy. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example<!-- Tomcat 7+ Cors Filter --> <filter> <filter-name>CorsFilter</filter-name> <filter-class>org.apache.catalina.filters.CorsFilter</filter-class> <init-param> <param-name>cors.allowed.origins</param-name> <param-value>*</param-value> <!-- Sensitive --> </init-param> </filter> Compliant Solution<!-- Tomcat 7+ Cors Filter --> <filter> <filter-name>CorsFilter</filter-name> <filter-class>org.apache.catalina.filters.CorsFilter</filter-class> <init-param> <param-name>cors.allowed.origins</param-name> <param-value>https://trusted1.org,https://trusted2.org</param-value> <!-- Compliant --> </init-param> </filter> See
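Framework aside, the allow-list check itself can be sketched in plain Java (the `CorsPolicy` class name and the trusted origins are illustrative): echo the request’s `Origin` header back only on an exact match against the allow-list, and omit the header otherwise.

```java
import java.util.Set;

public class CorsPolicy {
    private static final Set<String> TRUSTED_ORIGINS = Set.of(
            "https://trusted1.org",
            "https://trusted2.org");

    // Returns the value to place in Access-Control-Allow-Origin,
    // or null when the header should be omitted entirely.
    static String allowOrigin(String requestOrigin) {
        if (requestOrigin != null && TRUSTED_ORIGINS.contains(requestOrigin)) {
            return requestOrigin; // exact match against the allow-list
        }
        return null; // unknown origin: do not relax the same-origin policy
    }

    public static void main(String[] args) {
        System.out.println(allowOrigin("https://trusted1.org"));
        System.out.println(allowOrigin("https://evil.example"));
    }
}
```

Reflecting the origin only after an exact-match lookup avoids both the wildcard `*` and the unsafe pattern of blindly echoing back whatever origin the client sends.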
|
||||||||||||
flex:S1465 |
Why is this an issue? A `LocalConnection.allowDomain("*")` call lets SWF files from any domain invoke this SWF’s LocalConnection handlers, exposing its functionality to potentially malicious content. Noncompliant code examplelocalConnection.allowDomain("*"); Compliant solutionlocalConnection.allowDomain("www.myDomain.com"); |
||||||||||||
flex:S1466 |
Why is this an issue?The Security.exactSettings value should remain set at the default value of true. Setting this value to false could make the SWF vulnerable to cross-domain attacks. Noncompliant code exampleSecurity.exactSettings = false; Compliant solutionSecurity.exactSettings = true; |
||||||||||||
flex:S1468 |
Why is this an issue?Calling Security.allowDomain("*") lets any domain cross-script into the domain of this SWF and exercise its functionality. Noncompliant code exampleSecurity.allowDomain("*"); Compliant solutionSecurity.allowDomain("www.myDomain.com"); |
||||||||||||
flex:S1951 |
This rule is deprecated; use S4507 instead. Why is this an issue? The `trace()` function outputs debug statements, which can expose valuable information to attackers and should not be left in production code. Noncompliant code examplevar val:Number = doCalculation(); trace("Calculation result: " + val); // Noncompliant Compliant solutionvar val:Number = doCalculation(); Resources
|
||||||||||||
flex:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Do not enable debugging features on production servers or applications distributed to end users. Sensitive Code Example: if (unexpectedCondition) { Alert.show("Unexpected Condition"); // Sensitive } The `trace()` function outputs debug statements: var val:Number = doCalculation(); trace("Calculation result: " + val); // Sensitive See
|
||||||||||||
flex:S1442 |
This rule is deprecated; use S4507 instead. Why is this an issue? `Alert.show(...)` pop-ups are useful for debugging during development, but in production they can expose sensitive information to users or attackers; such calls should be removed or guarded before release.
Noncompliant code exampleif (unexpectedCondition) { Alert.show("Unexpected Condition"); } Resources
|
||||||||||||
java:S5852 |
Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input; in some cases this can cause performance problems, called catastrophic backtracking situations. This rule determines the runtime complexity of a regular expression and informs you of the complexity if it is not linear. Note that, due to improvements to the matching algorithm, some cases of exponential runtime complexity have become impossible when run using JDK 9 or later. In such cases, an issue will only be reported if the project’s target Java version is 8 or earlier. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: To avoid catastrophic backtracking situations, make sure that none of the following conditions applies to your regular expression. In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen. Note that when performing a full match (e.g. using `Matcher.matches()`), the end of the regex itself counts as a pattern that can fail, because the entire input must be consumed.
In order to rewrite your regular expression without these patterns, consider the following strategies:
Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when using partial matches, for which quadratic runtimes are quite hard to avoid. In those cases consider the following approaches:
Sensitive Code Example: The first regex evaluation will never end when run on JDK 8 and earlier, and the second will never end on any JDK version: java.util.regex.Pattern.compile("(a+)+").matcher( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaa!").matches(); // Sensitive java.util.regex.Pattern.compile("(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*").matcher( "hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchi"+ "cchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihi"+ "chicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccch"+ "chhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihh"+ "ichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihih"+ "iihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci").matches(); // Sensitive Compliant Solution: Possessive quantifiers do not keep backtracking positions and can therefore be used, where possible, to avoid these performance issues: java.util.regex.Pattern.compile("(a+)++").matcher( "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+ "aaaaaaaaaaaaaaa!").matches(); // Compliant java.util.regex.Pattern.compile("(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*+").matcher( "hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchi"+ "cchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihi"+ "chicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccch"+ "chhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihh"+ "ichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihih"+ "iihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci").matches(); // Compliant See
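Since `java.util.regex` offers no match-timeout parameter, a common defensive technique (sketched below; the class and method names are ours, not a standard API) is to wrap the input in a `CharSequence` that aborts once a deadline passes: the matcher reads every character through `charAt`, so a runaway backtracking match is cut short with an exception.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TimeboxedRegex {
    // CharSequence wrapper that aborts matching once a deadline has passed.
    static final class DeadlineCharSequence implements CharSequence {
        private final CharSequence inner;
        private final long deadlineNanos;

        private DeadlineCharSequence(CharSequence inner, long deadlineNanos) {
            this.inner = inner;
            this.deadlineNanos = deadlineNanos;
        }

        static DeadlineCharSequence withTimeout(CharSequence inner, long timeoutMillis) {
            return new DeadlineCharSequence(inner, System.nanoTime() + timeoutMillis * 1_000_000L);
        }

        @Override public int length() { return inner.length(); }

        @Override public char charAt(int index) {
            if (System.nanoTime() > deadlineNanos) {
                throw new IllegalStateException("regex evaluation timed out");
            }
            return inner.charAt(index);
        }

        @Override public CharSequence subSequence(int start, int end) {
            return new DeadlineCharSequence(inner.subSequence(start, end), deadlineNanos);
        }

        @Override public String toString() { return inner.toString(); }
    }

    static boolean matchesWithTimeout(String regex, String input, long timeoutMillis) {
        Matcher m = Pattern.compile(regex)
                .matcher(DeadlineCharSequence.withTimeout(input, timeoutMillis));
        return m.matches();
    }

    public static void main(String[] args) {
        System.out.println(matchesWithTimeout("[0-9]+", "12345", 1000)); // true
        try {
            matchesWithTimeout("[0-9]+", "12345", -1); // deadline already in the past
        } catch (IllegalStateException e) {
            System.out.println("timed out");
        }
    }
}
```

A production variant would catch the exception at the call site and treat the input as non-matching or reject it outright; this mitigates, but does not replace, rewriting the regex to run in linear time.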
|
||||||||||||
java:S2115 |
When accessing a database, an empty password should be avoided as it introduces a weakness. Why is this an issue?When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials. What is the potential impact?Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains. Unauthorized Access to Sensitive DataWhen a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage. Compromise of System IntegrityWithout a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks. Unwanted Modifications or DeletionsThe absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. 
This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences. Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm. How to fix it in Java SE: Code examples: The following code uses an empty password to connect to an in-memory Apache Derby database. The vulnerability can be fixed by using a strong password retrieved from a JVM system property; the property is supplied at deployment time, keeping the secret out of the codebase. Noncompliant code exampleConnection conn = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "login", ""); // Noncompliant Compliant solutionString password = System.getProperty("database.password"); Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "login", password); Pitfalls: Hard-coded passwords: It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:
To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase. ResourcesStandards |
||||||||||||
java:S3329 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In Cipher Block Chaining (CBC) mode, each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV). If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext. What is the potential impact? After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surface: By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacy: When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, a company, its employees, users, and partners could be seriously affected. The impact is twofold: beyond the breach itself, exposure of encrypted data can undermine trust in the organization, as customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issues: In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Java Cryptography Extension: Code examples: Noncompliant code exampleimport java.nio.charset.StandardCharsets; import java.security.NoSuchAlgorithmException; import java.security.InvalidKeyException; import java.security.InvalidAlgorithmParameterException; import javax.crypto.Cipher; import javax.crypto.spec.GCMParameterSpec; import javax.crypto.spec.SecretKeySpec; import javax.crypto.NoSuchPaddingException; public void encrypt(String key, String plainText) { byte[] randomBytes = "7cVgr5cbdCZVw5WY".getBytes(StandardCharsets.UTF_8); GCMParameterSpec iv = new GCMParameterSpec(128, randomBytes); SecretKeySpec keySpec = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES"); try { Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv); // Noncompliant: the IV is static and reused across messages } catch(NoSuchAlgorithmException|InvalidKeyException| NoSuchPaddingException|InvalidAlgorithmParameterException e) { // ... } } Compliant solutionIn this example, the code explicitly uses a number generator that is considered strong.
import java.nio.charset.StandardCharsets; import java.security.SecureRandom; import java.security.NoSuchAlgorithmException; import java.security.InvalidKeyException; import java.security.InvalidAlgorithmParameterException; import javax.crypto.Cipher; import javax.crypto.spec.GCMParameterSpec; import javax.crypto.spec.SecretKeySpec; import javax.crypto.NoSuchPaddingException; public void encrypt(String key, String plainText) { SecureRandom random = new SecureRandom(); byte[] randomBytes = new byte[16]; random.nextBytes(randomBytes); GCMParameterSpec iv = new GCMParameterSpec(128, randomBytes); SecretKeySpec keySpec = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES"); try { Cipher cipher = Cipher.getInstance("AES/CBC/NoPadding"); cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv); } catch(NoSuchAlgorithmException|InvalidKeyException| NoSuchPaddingException|InvalidAlgorithmParameterException e) { // ... } } How does this work?Use unique IVsTo ensure high security, initialization vectors must meet two important criteria:
The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext. In the previous non-compliant example, the problem is not that the IV is hard-coded per se, but that the same IV is reused for every encryption. ResourcesStandards
|
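As explained above, the IV must be unique per encryption but does not need to be secret. A common way to satisfy both criteria is to generate a fresh random IV for each message and prepend it to the ciphertext. The following is a minimal sketch assuming AES-GCM with a 96-bit IV; the class and method names are illustrative, not part of the rule:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class IvDemo {
    private static final int IV_LEN = 12;    // 96-bit IV, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    // Encrypt with a fresh random IV and prepend the IV to the ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv); // unique IV per message
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return out;
    }

    // Split the transmitted IV back off before decrypting.
    static byte[] decrypt(SecretKey key, byte[] message) throws Exception {
        byte[] iv = Arrays.copyOfRange(message, 0, IV_LEN);
        byte[] ct = Arrays.copyOfRange(message, IV_LEN, message.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        return cipher.doFinal(ct);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] wire = encrypt(key, "attack at dawn".getBytes());
        System.out.println(new String(decrypt(key, wire)));
    }
}
```

Because the IV is random per call, encrypting the same plaintext twice yields different ciphertexts, which is exactly the property the rule asks for.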
||||||||||||
java:S4502 |
A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application. The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request; because web browsers automatically include cookies, the actions can be authenticated and sensitive. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleSpring Security provides protection against CSRF attacks by default, but it can be disabled: @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.csrf().disable(); // Sensitive: csrf protection is entirely disabled // or http.csrf().ignoringAntMatchers("/route/"); // Sensitive: csrf protection is disabled for specific routes } } Compliant SolutionSpring Security CSRF protection is enabled by default; do not disable it: @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { // http.csrf().disable(); // Compliant } } See |
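Spring Security's built-in protection, kept enabled as in the compliant solution, is the recommended defense. For context, the synchronizer-token pattern it implements requires tokens that are unpredictable and compared in constant time; a minimal plain-JDK sketch of those two properties (class and method names are illustrative, and this is not a replacement for the framework support):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfToken {
    // Generate an unpredictable per-session token with a CSPRNG.
    static String newToken() {
        byte[] raw = new byte[32];
        new SecureRandom().nextBytes(raw);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Constant-time comparison of the stored token against the submitted one,
    // to avoid leaking information through timing differences.
    static boolean matches(String expected, String submitted) {
        if (expected == null || submitted == null) return false;
        return MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                                     submitted.getBytes(StandardCharsets.UTF_8));
    }
}
```

The server stores the token in the session, embeds it in forms, and rejects state-changing requests whose submitted token does not match.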
||||||||||||
java:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDo not enable debugging features on production servers or applications distributed to end users. Sensitive Code Example
try { /* ... */ } catch(Exception e) { e.printStackTrace(); // Sensitive } The EnableWebSecurity annotation for Spring Framework with debug set to true enables debugging support: import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; @Configuration @EnableWebSecurity(debug = true) // Sensitive public class WebSecurityConfig extends WebSecurityConfigurerAdapter { // ... } WebView.setWebContentsDebuggingEnabled(true) for Android enables debugging support: import android.webkit.WebView; WebView.setWebContentsDebuggingEnabled(true); // Sensitive WebView.getFactory().getStatics().setWebContentsDebuggingEnabled(true); // Sensitive Compliant SolutionLoggers should be used (instead of printStackTrace) to record exceptions: try { /* ... */ } catch(Exception e) { LOGGER.log("context", e); } The EnableWebSecurity annotation for Spring Framework with debug set to false disables debugging support: import org.springframework.context.annotation.Configuration; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; @Configuration @EnableWebSecurity(debug = false) public class WebSecurityConfig extends WebSecurityConfigurerAdapter { // ... } WebView.setWebContentsDebuggingEnabled(false) for Android disables debugging support: import android.webkit.WebView; WebView.setWebContentsDebuggingEnabled(false); WebView.getFactory().getStatics().setWebContentsDebuggingEnabled(false); See |
||||||||||||
java:S4512 |
Setting JavaBean properties is security-sensitive. Doing it with untrusted values has led in the past to the following vulnerability: JavaBeans can have their properties or nested properties set by population functions. An attacker can leverage this feature to push malicious data into the JavaBean that can compromise the software's integrity. A typical attack will try to manipulate the ClassLoader and ultimately execute malicious code. This rule raises an issue when:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesSanitize all values used as JavaBean properties. Don’t set any sensitive properties. Keep full control over which properties are set. If the property names are provided by an untrusted source, filter them with a whitelist. Sensitive Code ExampleCompany bean = new Company(); HashMap map = new HashMap(); Enumeration names = request.getParameterNames(); while (names.hasMoreElements()) { String name = (String) names.nextElement(); map.put(name, request.getParameterValues(name)); } BeanUtils.populate(bean, map); // Sensitive: "map" is populated with data coming from user input, here "request.getParameterNames()" See
|
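One way to keep full control over which properties are set, as recommended above, is to filter the request parameters against an allowlist before handing them to BeanUtils.populate. A sketch, with illustrative property names:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class BeanParamFilter {
    // Allowlist of properties that clients may legitimately set (illustrative names).
    private static final Set<String> ALLOWED = Set.of("name", "address", "phone");

    // Returns a copy of the parameter map containing only allowlisted names,
    // so dangerous paths such as "class.classLoader.*" are dropped.
    static Map<String, Object> filter(Map<String, Object> params) {
        Map<String, Object> safe = new HashMap<>();
        for (Map.Entry<String, Object> e : params.entrySet()) {
            if (ALLOWED.contains(e.getKey())) {
                safe.put(e.getKey(), e.getValue());
            }
        }
        return safe; // pass this to BeanUtils.populate(bean, safe)
    }
}
```

The call site then becomes BeanUtils.populate(bean, BeanParamFilter.filter(map)), so unexpected property names never reach the bean.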
||||||||||||
java:S4684 |
With Spring, when a request mapping method is configured to accept bean objects as arguments, the framework will automatically bind HTTP parameters to those objects' properties. If the targeted beans are also persistent entities, the framework will also store those properties in the storage backend, usually the application’s database. Why is this an issue?By accepting persistent entities as method arguments, the application allows clients to manipulate the object’s properties directly. What is the potential impact?Attackers could forge malicious HTTP requests that will alter unexpected properties of persistent objects. This can lead to unauthorized modifications of the entity’s state. This is known as a mass assignment attack. Depending on the affected objects and properties, the consequences can vary. Privilege escalationIf the affected object is used to store the client’s identity or permissions, the attacker could alter it to change their entitlement on the application. This can lead to horizontal or vertical privilege escalation. Security checks bypassBecause persistent objects are modified directly without prior logic, attackers could exploit this issue to bypass security measures otherwise enforced by the application. For example, an attacker might be able to change their e-mail address to an invalid one by directly setting it without going through the application’s email validation process. The same could also apply to passwords that attackers could change without complexity validation or knowledge of their current value. 
How to fix it in Java EECode examplesThe following code is vulnerable to a mass assignment attack because it allows clients to modify the persistent Wish entity, including its nested Client, directly from HTTP parameters. Noncompliant code exampleimport javax.persistence.Entity; @Entity public class Wish { Long productId; Long quantity; Client client; } @Entity public class Client { String clientId; String name; String password; } import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; @Controller public class PurchaseOrderController { @RequestMapping(path = "/saveForLater", method = RequestMethod.POST) public String saveForLater(Wish wish) { // Noncompliant session.save(wish); } } Compliant solutionpublic class WishDTO { Long productId; Long quantity; Long clientId; } import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; @Controller public class PurchaseOrderController { @RequestMapping(path = "/saveForLater", method = RequestMethod.POST) public String saveForLater(WishDTO wish) { Wish persistentWish = new Wish(); persistentWish.productId = wish.productId; persistentWish.quantity = wish.quantity; persistentWish.client = getClientById(wish.clientId); session.save(persistentWish); } } How does this work?The compliant code implements a Data Transfer Object (DTO) layer. Instead of accepting a persistent entity directly, the controller receives a plain WishDTO and copies only the intended fields into a new Wish entity. ResourcesDocumentation
Standards
Articles & blog postsOWASP O2 Platform Blog - Two Security Vulnerabilities in the Spring Framework’s MVC |
||||||||||||
java:S5659 |
This vulnerability allows forging of JSON Web Tokens to impersonate other users. Why is this an issue?JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature. What is the potential impact?When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities. Impersonation of usersJWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data. Unauthorized data accessWhen a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access. How to fix it in Java JWTCode examplesThe following code contains examples of JWT encoding and decoding without a strong cipher algorithm. 
Noncompliant code exampleimport io.jsonwebtoken.Jwts; public void encode() { Jwts.builder() .setSubject(USER_LOGIN) .compact(); // Noncompliant } import io.jsonwebtoken.Jwts; public void decode() { Jwts.parser() .setSigningKey(SECRET_KEY) .parse(token) .getBody(); // Noncompliant } Compliant solutionimport io.jsonwebtoken.Jwts; public void encode() { Jwts.builder() .setSubject(USER_LOGIN) .signWith(SignatureAlgorithm.HS256, SECRET_KEY) .compact(); } When decoding, use parseClaimsJws, which verifies the token’s signature (unlike parse): import io.jsonwebtoken.Jwts; public void decode() { Jwts.parser() .setSigningKey(SECRET_KEY) .parseClaimsJws(token) .getBody(); } How does this work?Always sign your tokensThe foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created. Choose a strong cipher algorithmIt is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens. Verify the signature of your tokensResolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.
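To make the signing and verification steps concrete, here is a minimal HS256 sketch using only the JDK's Mac API. A production application should rely on a maintained JWT library such as the one shown above; the class and method names here are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hs256Jwt {
    private static String b64(byte[] in) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(in);
    }

    private static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Build header.payload.signature, signing header and payload together.
    static String sign(byte[] key, String payloadJson) throws Exception {
        String head = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String body = b64(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = head + "." + body;
        return signingInput + "." + b64(hmac(key, signingInput));
    }

    // Recompute the HMAC and compare it to the transmitted signature
    // in constant time; a token with a wrong or missing signature fails.
    static boolean verify(byte[] key, String token) throws Exception {
        int i = token.lastIndexOf('.');
        if (i < 0) return false;
        byte[] expected = hmac(key, token.substring(0, i));
        byte[] given = Base64.getUrlDecoder().decode(token.substring(i + 1));
        return MessageDigest.isEqual(expected, given);
    }
}
```

The important point is that verify recomputes the signature itself; trusting the claims without this step is exactly the vulnerability described above.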
Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked. To resolve the issue, follow these instructions:
By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process. Going the extra mileSecurely store your secret keysEnsure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services. Rotate your secret keysEven with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions. ResourcesStandards |
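The key-storage advice above can be sketched as follows: the signing key is read from an environment variable instead of being hard-coded (the variable name JWT_SIGNING_KEY is illustrative):

```java
import java.util.Base64;

public class SigningKeyLoader {
    // Decode a base64-encoded key, rejecting missing or empty values.
    static byte[] decodeKey(String base64) {
        if (base64 == null || base64.isEmpty()) {
            throw new IllegalStateException("signing key is not configured");
        }
        return Base64.getDecoder().decode(base64);
    }

    // Load the key from the environment so it never appears in source control.
    static byte[] fromEnvironment() {
        return decodeKey(System.getenv("JWT_SIGNING_KEY"));
    }
}
```

Failing fast when the variable is absent is deliberate: a silently empty key would defeat the signature entirely.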
||||||||||||
java:S5542 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext. Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution. For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in Java Cryptography ExtensionCode examplesNoncompliant code exampleExample with a symmetric cipher, AES: import javax.crypto.Cipher; import java.security.NoSuchAlgorithmException; import javax.crypto.NoSuchPaddingException; public static void main(String[] args) { try { Cipher.getInstance("AES/CBC/PKCS5Padding"); // Noncompliant } catch(NoSuchAlgorithmException|NoSuchPaddingException e) { // ... 
} } Example with an asymmetric cipher, RSA: import javax.crypto.Cipher; import java.security.NoSuchAlgorithmException; import javax.crypto.NoSuchPaddingException; public static void main(String[] args) { try { Cipher.getInstance("RSA/None/NoPadding"); // Noncompliant } catch(NoSuchAlgorithmException|NoSuchPaddingException e) { // ... } } Compliant solutionFor the AES symmetric cipher, use the GCM mode: import javax.crypto.Cipher; import java.security.NoSuchAlgorithmException; import javax.crypto.NoSuchPaddingException; public static void main(String[] args) { try { Cipher.getInstance("AES/GCM/NoPadding"); } catch(NoSuchAlgorithmException|NoSuchPaddingException e) { // ... } } For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP): import javax.crypto.Cipher; import java.security.NoSuchAlgorithmException; import javax.crypto.NoSuchPaddingException; public static void main(String[] args) { try { Cipher.getInstance("RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING"); } catch(NoSuchAlgorithmException|NoSuchPaddingException e) { // ... } } How does this work?As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. Appropriate choices are currently the following. For AES: use authenticated encryption modesThe best-known authenticated encryption mode for AES is Galois/Counter mode (GCM). GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data. Other similar modes are:
It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead. For RSA: use the OAEP schemeThe Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA. ResourcesArticles & blog posts
Standards
|
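The authenticity guarantee of GCM described above can be demonstrated directly: flipping a single ciphertext bit makes decryption fail with an authentication error instead of silently returning corrupted plaintext. A sketch with illustrative names:

```java
import java.security.SecureRandom;
import javax.crypto.AEADBadTagException;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmTamperDemo {
    // Encrypts a message, flips one ciphertext bit, and reports whether
    // decryption detects the tampering (it should, thanks to the GCM tag).
    static boolean detectsTampering() throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = enc.doFinal("attack at dawn".getBytes());

        ct[0] ^= 0x01; // simulate an attacker modifying the ciphertext

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        try {
            dec.doFinal(ct);
            return false; // tampering went unnoticed (as with unauthenticated CBC)
        } catch (AEADBadTagException e) {
            return true; // authentication tag check failed, as expected
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(detectsTampering() ? "tamper detected" : "tamper NOT detected");
    }
}
```

With an unauthenticated mode such as plain CBC, the same bit flip would decrypt without error, which is what makes ciphertext-manipulation attacks possible.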
||||||||||||
java:S5547 |
This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. What is the potential impact?The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability. Theft of sensitive dataThe encrypted message might contain data that is considered sensitive and should not be known to third parties. By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases. Additional attack surfaceBy modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. How to fix it in Java Cryptography ExtensionCode examplesThe following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided. Noncompliant code exampleimport javax.crypto.Cipher; import java.security.NoSuchAlgorithmException; import javax.crypto.NoSuchPaddingException; public static void main(String[] args) { try { Cipher des = Cipher.getInstance("DES"); // Noncompliant } catch(NoSuchAlgorithmException|NoSuchPaddingException e) { // ... } } Compliant solutionimport javax.crypto.Cipher; import java.security.NoSuchAlgorithmException; import javax.crypto.NoSuchPaddingException; public static void main(String[] args) { try { Cipher aes = Cipher.getInstance("AES/GCM/NoPadding"); } catch(NoSuchAlgorithmException|NoSuchPaddingException e) { // ... } } How does this work?Use a secure algorithmIt is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. 
A common choice for such an algorithm is the Advanced Encryption Standard (AES). For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits. ResourcesStandards |
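The block-size point can be checked directly with the JCE API: DES uses 64-bit (8-byte) blocks, below the recommended 128-bit minimum, while AES uses 128-bit (16-byte) blocks. A small sketch:

```java
import javax.crypto.Cipher;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        // DES operates on 64-bit blocks: below the recommended minimum.
        int des = Cipher.getInstance("DES/ECB/NoPadding").getBlockSize();
        // AES operates on 128-bit blocks.
        int aes = Cipher.getInstance("AES/GCM/NoPadding").getBlockSize();
        System.out.println("DES block: " + des + " bytes, AES block: " + aes + " bytes");
    }
}
```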
||||||||||||
java:S5301 |
ActiveMQ can send/receive JMS Object messages (ObjectMessage in ActiveMQ context) to comply with JMS specifications. Internally, ActiveMQ relies on Java’s serialization mechanism for the marshaling and unmarshalling of the messages' payload. Applications should restrict the types that can be unserialized from JMS messages. Why is this an issue?When the application does not implement controls over the JMS object types, its clients could be able to force the deserialization of arbitrary objects. This may lead to deserialization injection attacks. What is the potential impact?Attackers will be able to force the deserialization of arbitrary objects. This process will trigger the execution of magic unmarshalling methods on the object and its properties. With a specially crafted serialized object, the attackers can exploit those magic methods to achieve malicious purposes. While the exact impact depends on the types available in the execution context at the time of deserialization, such an attack can generally lead to the execution of arbitrary code on the application server. Application-specific attacksBy exploiting the behavior of some of the application-defined types and objects, the attacker could manage to affect the application’s business logic. The exact consequences will depend on the application’s nature:
Publicly-known exploitationIn some cases, depending on the library the application uses and their versions, there may exist publicly known deserialization attack payloads known as gadget chains. In general, they are designed to have severe consequences, such as:
Those attacks are independent of the application’s own logic and of the types it specifies. How to fix it in Java EECode examplesThe following code example is vulnerable to a deserialization injection attack because it allows the deserialization of arbitrary types from JMS messages. Noncompliant code exampleActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); factory.setTrustAllPackages(true); // Noncompliant Compliant solutionActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616"); factory.setTrustedPackages(Arrays.asList("org.mypackage1", "org.mypackage2")); How does this work?The noncompliant code example calls the setTrustAllPackages method with true, which allows objects of any type to be deserialized from JMS messages. While defining a short list of trusted types is generally the state-of-the-art solution to avoid deserialization injection attacks, it is important to ensure that the allowed classes and packages can not be used to exploit the issue. In that case, a vulnerability would still be present. Note that ActiveMQ, starting with version 5.12.2, forces developers to explicitly list packages that JMS messages can contain. This limits the risk
of successful exploitation. In versions before that one, the trusted-packages restriction is not available and arbitrary types can be deserialized. ResourcesDocumentation
Standards |
||||||||||||
java:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated: SSL v2.0 and v3.0, and TLS v1.0 and v1.1.
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in Java Cryptography ExtensionCode examplesNoncompliant code exampleimport javax.net.ssl.SSLContext; import java.security.NoSuchAlgorithmException; public static void main(String[] args) { try { SSLContext.getInstance("TLSv1.1"); // Noncompliant } catch (NoSuchAlgorithmException e) { // ... } } Compliant solutionimport javax.net.ssl.SSLContext; import java.security.NoSuchAlgorithmException; public static void main(String[] args) { try { SSLContext.getInstance("TLSv1.2"); } catch (NoSuchAlgorithmException e) { // ... } } How does this work?As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following. Use TLS v1.2 or TLS v1.3Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. 
The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older and insecure cipher suites that are deprecated as insecure. On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. ResourcesArticles & blog posts
Standards
|
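Requesting a modern protocol version can be sketched as follows; SSLContext.getInstance throws NoSuchAlgorithmException if the runtime does not support the requested version:

```java
import javax.net.ssl.SSLContext;

public class TlsVersionDemo {
    public static void main(String[] args) throws Exception {
        // Ask for TLS v1.2 (or "TLSv1.3" on runtimes that support it).
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default key managers, trust managers, RNG
        System.out.println(ctx.getProtocol());
    }
}
```

Note that the context's enabled cipher suites should also be reviewed, as the surrounding text explains for TLS v1.2.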
||||||||||||
java:S4544 |
Using unsafe Jackson deserialization configuration is security-sensitive. It has led in the past to the following vulnerabilities: When Jackson is configured to allow Polymorphic Type Handling (aka PTH), formerly known as Polymorphic Deserialization, "deserialization gadgets" may allow an attacker to perform remote code execution. This rule raises an issue when:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleObjectMapper mapper = new ObjectMapper(); mapper.enableDefaultTyping(); // Sensitive @JsonTypeInfo(use = Id.CLASS) // Sensitive abstract class PhoneNumber { } See
|
||||||||||||
java:S5876 |
An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled. Why is this an issue?Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:
What is the potential impact?Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following: ImpersonationOnce an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf. Data BreachIf an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes. Privilege EscalationIn some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems. 
How to fix it in SpringCode examplesIn a Spring Security context, session fixation protection is enabled by default but can be disabled with sessionFixation().none(). Noncompliant code example@Override protected void configure(HttpSecurity http) throws Exception { http.sessionManagement() .sessionFixation().none(); // Noncompliant: the existing session will continue } Compliant solution@Override protected void configure(HttpSecurity http) throws Exception { http.sessionManagement() .sessionFixation().migrateSession(); } How does this work?The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process. Here’s how session fixation protection typically works:
By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process. In Spring, calling sessionFixation().migrateSession() creates a new session upon authentication and copies the attributes of the original session into it. ResourcesDocumentationSession Fixation Attack Protection Standards |
||||||||||||
java:S2245 |
Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities: When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information. As the java.util.Random class relies on a pseudorandom number generator, this class and the related Math.random() method should not be used for security-critical applications. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

Random random = new Random(); // Sensitive use of Random
byte bytes[] = new byte[20];
random.nextBytes(bytes); // Check if bytes is used for hashing, encryption, etc...

Compliant Solution

SecureRandom random = new SecureRandom(); // Compliant for security-sensitive use cases
byte bytes[] = new byte[20];
random.nextBytes(bytes);

See
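As a usage sketch, the compliant `SecureRandom` bytes can be turned into a URL-safe token, for example for a password-reset link. The class and method names here are illustrative, not part of the rule:

```java
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch: an unpredictable, URL-safe token built on SecureRandom.
public class TokenDemo {
    public static String newToken() {
        SecureRandom random = new SecureRandom(); // CSPRNG, suitable for security use
        byte[] bytes = new byte[20];              // 160 bits of entropy
        random.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        String a = newToken();
        String b = newToken();
        System.out.println(a.length());  // 27 characters encode 20 bytes
        System.out.println(a.equals(b)); // independent draws differ
    }
}
```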
|
||||||||||||
java:S3330 |
When a cookie is configured with the `HttpOnly` attribute set to false (or left undefined), client-side scripts can read it, so it can be stolen in case of an XSS vulnerability.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example

If you create a security-sensitive cookie in your Java code:

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setHttpOnly(false); // Sensitive: this sensitive cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

By default the `HttpOnly` flag is set to false:

Cookie c = new Cookie(COOKIENAME, sensitivedata); // Sensitive: this sensitive cookie is created with the httponly flag not defined (by default set to false) and so it can be stolen easily in case of XSS vulnerability

Compliant Solution

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setHttpOnly(true); // Compliant: this sensitive cookie is protected against theft (HttpOnly=true)

See
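For intuition, `setHttpOnly(true)` ultimately changes the `Set-Cookie` response header the container emits. The sketch below builds such a header by hand purely as an illustration (in a real servlet the container writes it for you; the helper name is hypothetical):

```java
// Illustration of the Set-Cookie header shape produced when HttpOnly is enabled.
public class CookieHeaderDemo {
    static String setCookieHeader(String name, String value, boolean httpOnly) {
        StringBuilder sb = new StringBuilder(name + "=" + value + "; Path=/; Secure");
        if (httpOnly) {
            sb.append("; HttpOnly"); // browsers then hide the cookie from document.cookie
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(setCookieHeader("SESSIONID", "abc123", true));
    }
}
```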
|
||||||||||||
java:S4426 |
This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms. Note that depending on the algorithm, the term key refers to a different mathematical property. For example:
If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext. In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given time frame, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptography Extension

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm. Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.

Noncompliant code example

Here is an example of a private key generation with RSA:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
  try {
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
    keyPairGenerator.initialize(1024); // Noncompliant
  } catch (NoSuchAlgorithmException e) {
    // ...
  }
}

Here is an example of a symmetric key generation with AES:

import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
  try {
    KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
    keyGenerator.init(64); // Noncompliant
  } catch (NoSuchAlgorithmException e) {
    // ...
  }
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

public static void main(String[] args) {
  try {
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("EC");
    ECGenParameterSpec ellipticCurveName = new ECGenParameterSpec("secp112r1"); // Noncompliant
    keyPairGenerator.initialize(ellipticCurveName);
  } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException e) {
    // ...
  }
}

Compliant solution

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
  try {
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
    keyPairGenerator.initialize(2048);
  } catch (NoSuchAlgorithmException e) {
    // ...
  }
}

import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
  try {
    KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
    keyGenerator.init(128);
  } catch (NoSuchAlgorithmException e) {
    // ...
  }
}

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

public static void main(String[] args) {
  try {
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("EC");
    ECGenParameterSpec ellipticCurveName = new ECGenParameterSpec("secp256r1");
    keyPairGenerator.initialize(ellipticCurveName);
  } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException e) {
    // ...
  }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community. The appropriate choices are the following.
RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem. In general, a minimum key size of 2048 bits is recommended for both; it provides 112 bits of security. A key size of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys. Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, `secp256r1` works with a 256-bit key. Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:
Going the extra milePre-Quantum CryptographyEncrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety. Resources
Articles & blog posts
Standards
|
||||||||||||
java:S2254 |
This function uses a session ID that is supplied by the client. Because of this, the ID may not be valid or might even be spoofed.

Why is this an issue?

According to the API documentation of the `HttpServletRequest.getRequestedSessionId()` method:
The session ID it returns is either transmitted through a cookie or a URL parameter. This allows an end user to manually update the value of this session ID in an HTTP request.

Due to the ability of the end user to manually change the value, the session ID in the request should only be used by a servlet container (e.g. Tomcat or Jetty) to see if the value matches the ID of an existing session. If it does not, the user should be considered unauthenticated.

What is the potential impact?

Using a client-supplied session ID to manage sessions on the server side can potentially have an impact on the security of the application.

Impersonation (through session fixation)

If an attacker succeeds in fixing a user’s session to a session identifier that they know, then they can impersonate this victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

How to fix it in Java EE

Code examples

In both examples, a session ID is used to check whether a user’s session is still active. In the noncompliant example, the session ID supplied by the user is used. In the compliant example, the session ID defined by the server is used instead.

Noncompliant code example

if (isActiveSession(request.getRequestedSessionId())) { // Noncompliant
  // ...
}

Compliant solution

if (isActiveSession(request.getSession().getId())) {
  // ...
}

How does this work?

The noncompliant example uses `request.getRequestedSessionId()`, the client-supplied value, to verify whether the session is active. The compliant example instead uses the server’s session ID to verify if the session is active. Additionally,

Resources

Documentation
Standards |
||||||||||||
java:S2257 |
The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like `SHA-256` should be used instead.

This rule tracks the creation of custom cryptographic algorithms, such as classes extending `MessageDigest`.

Recommended Secure Coding Practices

Sensitive Code Example

public class MyCryptographicAlgorithm extends MessageDigest {
  ...
}

Compliant Solution

MessageDigest digest = MessageDigest.getInstance("SHA-256");

See
|
||||||||||||
java:S4433 |
Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:
A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider. In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information. Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.
How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

// Set up the environment for creating the initial context
Hashtable<String, Object> env = new Hashtable<String, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial");

// Use anonymous authentication
env.put(Context.SECURITY_AUTHENTICATION, "none"); // Noncompliant

// Create the initial context
DirContext ctx = new InitialDirContext(env);

Compliant solution

// Set up the environment for creating the initial context
Hashtable<String, Object> env = new Hashtable<String, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=Example");

// Use simple authentication
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, "cn=local, ou=Unit, o=Example");
env.put(Context.SECURITY_CREDENTIALS, getLDAPPassword());

// Create the initial context
DirContext ctx = new InitialDirContext(env);

Resources

Documentation
Standards |
||||||||||||
java:S4434 |
JNDI supports the deserialization of objects from LDAP directories, which can lead to remote code execution.

This rule raises an issue when an LDAP search query is executed with `SearchControls` configured to allow deserialization of the returned objects.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable deserialization of LDAP objects.

Sensitive Code Example

DirContext ctx = new InitialDirContext();
// ...
ctx.search(query, filter,
  new SearchControls(scope, countLimit, timeLimit, attributes,
    true, // Noncompliant; allows deserialization
    deref));

Compliant Solution

DirContext ctx = new InitialDirContext();
// ...
ctx.search(query, filter,
  new SearchControls(scope, countLimit, timeLimit, attributes,
    false, // Compliant
    deref));

See
|
||||||||||||
java:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure, because it is too easy to create hash collisions with them.

Ask Yourself Whether

The hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 or SHA-3, are recommended.

Sensitive Code Example

MessageDigest md1 = MessageDigest.getInstance("SHA"); // Sensitive: SHA is not a standard name, for most security providers it's an alias of SHA-1
MessageDigest md2 = MessageDigest.getInstance("SHA1"); // Sensitive

Compliant Solution

MessageDigest md1 = MessageDigest.getInstance("SHA-512"); // Compliant

See
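A runnable sketch of the compliant approach, hashing a string with an explicitly named strong algorithm and hex-encoding the digest (the class and helper names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha256Demo {
    static String sha256Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256"); // explicit, strong algorithm name
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // two lowercase hex digits per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Well-known SHA-256 test vector for the input "abc"
        System.out.println(sha256Hex("abc"));
    }
}
```

The output matches the published SHA-256 test vector for "abc", which is a quick sanity check that the standard algorithm, not an alias, is in use.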
|
||||||||||||
java:S4792 |
This rule is deprecated, and will eventually be removed. Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities: Logs are useful before, during and after a security incident.
Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how it is logged.

This rule flags for review code that initiates logger configuration. The goal is to guide security code reviews.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining on how to use your logs:
Sensitive Code Example

This rule supports the following libraries: Log4J, `java.util.logging` and Logback.

// === Log4J 2 ===
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.*;
import org.apache.logging.log4j.core.config.*;

// Sensitive: creating a new custom configuration
abstract class CustomConfigFactory extends ConfigurationFactory {
  // ...
}

class A {
  void foo(Configuration config, LoggerContext context, java.util.Map<String, Level> levelMap,
           Appender appender, java.io.InputStream stream, java.net.URI uri,
           java.io.File file, java.net.URL url, String source, ClassLoader loader,
           Level level, Filter filter) throws java.io.IOException {
    // Creating a new custom configuration
    ConfigurationBuilderFactory.newConfigurationBuilder(); // Sensitive

    // Setting loggers level can result in writing sensitive information in production
    Configurator.setAllLevels("com.example", Level.DEBUG); // Sensitive
    Configurator.setLevel("com.example", Level.DEBUG); // Sensitive
    Configurator.setLevel(levelMap); // Sensitive
    Configurator.setRootLevel(Level.DEBUG); // Sensitive

    config.addAppender(appender); // Sensitive: this modifies the configuration

    LoggerConfig loggerConfig = config.getRootLogger();
    loggerConfig.addAppender(appender, level, filter); // Sensitive
    loggerConfig.setLevel(level); // Sensitive

    context.setConfigLocation(uri); // Sensitive

    // Load the configuration from a stream or file
    new ConfigurationSource(stream); // Sensitive
    new ConfigurationSource(stream, file); // Sensitive
    new ConfigurationSource(stream, url); // Sensitive
    ConfigurationSource.fromResource(source, loader); // Sensitive
    ConfigurationSource.fromUri(uri); // Sensitive
  }
}

// === java.util.logging ===
import java.util.logging.*;

class M {
  void foo(LogManager logManager, Logger logger, java.io.InputStream is, Handler handler)
      throws SecurityException, java.io.IOException {
    logManager.readConfiguration(is); // Sensitive
    logger.setLevel(Level.FINEST); // Sensitive
    logger.addHandler(handler); // Sensitive
  }
}

// === Logback ===
import ch.qos.logback.classic.util.ContextInitializer;
import ch.qos.logback.core.Appender;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.*;

class M {
  void foo(Logger logger, Appender<ILoggingEvent> fileAppender) {
    System.setProperty(ContextInitializer.CONFIG_FILE_PROPERTY, "config.xml"); // Sensitive
    JoranConfigurator configurator = new JoranConfigurator(); // Sensitive
    logger.addAppender(fileAppender); // Sensitive
    logger.setLevel(Level.DEBUG); // Sensitive
  }
}

Exceptions

Log4J 1.x is not covered as it has reached end of life.

See
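One common remediation pattern is to drive the level from an external setting with a safe default, instead of hardcoding a verbose level like `FINEST`. This is a minimal `java.util.logging` sketch; the property name `app.log.level` is invented for the example:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: read the level from configuration (here a system property) with a safe default,
// so verbose debug logging cannot silently ship to production in the code itself.
public class LogConfigDemo {
    public static void main(String[] args) {
        String configured = System.getProperty("app.log.level", "INFO"); // default stays quiet
        Logger logger = Logger.getLogger("com.example");
        logger.setLevel(Level.parse(configured));
        System.out.println(logger.getLevel());
    }
}
```

Run normally, the logger stays at `INFO`; an operator can opt into `FINE` per environment without a code change.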
|
||||||||||||
java:S5527 |
This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. To do so, an attacker would obtain a valid certificate authenticating

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Apache Commons Email

Code examples

The following code contains examples of disabled hostname validation.
The hostname validation gets disabled because `setSSLCheckServerIdentity(true)` is never called, and the check is disabled by default.

Noncompliant code example

import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.Email;
import org.apache.commons.mail.SimpleEmail;

public void sendMail(String message) {
  Email email = new SimpleEmail();
  email.setMsg(message);
  email.setSmtpPort(465);
  email.setAuthenticator(new DefaultAuthenticator(username, password));
  email.setSSLOnConnect(true); // Noncompliant
  email.send();
}

Compliant solution

import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.Email;
import org.apache.commons.mail.SimpleEmail;

public void sendMail(String message) {
  Email email = new SimpleEmail();
  email.setMsg(message);
  email.setSmtpPort(465);
  email.setAuthenticator(new DefaultAuthenticator(username, password));
  email.setSSLCheckServerIdentity(true);
  email.setSSLOnConnect(true);
  email.send();
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:
Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources

Standards
|
||||||||||||
java:S2755 |
This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.
How to fix it in Java SE

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); // Noncompliant

Compliant solution

Protection from XXE can be done in several different ways. Choose one depending on how the affected parser object is used in your code.

1. The first way is to completely disable DOCTYPE declarations:

// Applicable to:
// - DocumentBuilderFactory
// - SAXParserFactory
// - SchemaFactory
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);

// For XMLInputFactory:
factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);

2. Disable external entity declarations completely:

// Applicable to:
// - DocumentBuilderFactory
// - SAXParserFactory
factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);

// For XMLInputFactory:
factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, Boolean.FALSE);

3. Prohibit the use of all protocols by external entities:

// `setAttribute` variant, applicable to:
// - DocumentBuilderFactory
// - TransformerFactory
factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

// `setProperty` variant, applicable to:
// - XMLInputFactory
// - SchemaFactory
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

// For SAXParserFactory, the prohibition is done on child objects:
SAXParser parser = factory.newSAXParser();
parser.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
parser.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.

Going the extra mile

Disable entity expansion

Specifically for DocumentBuilderFactory, entity expansion can also be disabled:

factory.setExpandEntityReferences(false);

Resources

Standards |
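The DOCTYPE-rejection option described for this rule can be demonstrated end to end with a small, runnable sketch (class and string contents are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class XxeDemo {
    static DocumentBuilder hardenedBuilder() throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Rejecting DOCTYPE declarations blocks entity-based attacks up front.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        return factory.newDocumentBuilder();
    }

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = hardenedBuilder();

        // Ordinary XML still parses.
        builder.parse(new ByteArrayInputStream("<a>ok</a>".getBytes(StandardCharsets.UTF_8)));
        System.out.println("plain XML parsed");

        // An XXE-style payload is rejected before any entity is resolved.
        String payload = "<!DOCTYPE a [<!ENTITY x SYSTEM \"file:///etc/passwd\">]><a>&x;</a>";
        try {
            builder.parse(new ByteArrayInputStream(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (org.xml.sax.SAXParseException e) {
            System.out.println("DOCTYPE rejected");
        }
    }
}
```

Legitimate documents continue to parse while the payload fails fast with a parse error, so no external resource is ever fetched.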
||||||||||||
java:S2612 |
In Unix file system permissions, the "others" class refers to all users except the owner of the file and the members of the group assigned to the file.

Granting permissions to this class can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

public void setPermissions(String filePath) {
  Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
  // user permission
  perms.add(PosixFilePermission.OWNER_READ);
  perms.add(PosixFilePermission.OWNER_WRITE);
  perms.add(PosixFilePermission.OWNER_EXECUTE);
  // group permissions
  perms.add(PosixFilePermission.GROUP_READ);
  perms.add(PosixFilePermission.GROUP_EXECUTE);
  // others permissions
  perms.add(PosixFilePermission.OTHERS_READ); // Sensitive
  perms.add(PosixFilePermission.OTHERS_WRITE); // Sensitive
  perms.add(PosixFilePermission.OTHERS_EXECUTE); // Sensitive
  Files.setPosixFilePermissions(Paths.get(filePath), perms);
}

public void setPermissionsUsingRuntimeExec(String filePath) {
  Runtime.getRuntime().exec("chmod 777 file.json"); // Sensitive
}

public void setOthersPermissionsHardCoded(String filePath) {
  Files.setPosixFilePermissions(Paths.get(filePath), PosixFilePermissions.fromString("rwxrwxrwx")); // Sensitive
}

Compliant Solution

On operating systems that implement the POSIX standard (this will throw an `UnsupportedOperationException` on non-POSIX file systems such as Windows):

public void setPermissionsSafe(String filePath) throws IOException {
  Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
  // user permission
  perms.add(PosixFilePermission.OWNER_READ);
  perms.add(PosixFilePermission.OWNER_WRITE);
  perms.add(PosixFilePermission.OWNER_EXECUTE);
  // group permissions
  perms.add(PosixFilePermission.GROUP_READ);
  perms.add(PosixFilePermission.GROUP_EXECUTE);
  // others permissions removed
  perms.remove(PosixFilePermission.OTHERS_READ); // Compliant
  perms.remove(PosixFilePermission.OTHERS_WRITE); // Compliant
  perms.remove(PosixFilePermission.OTHERS_EXECUTE); // Compliant
  Files.setPosixFilePermissions(Paths.get(filePath), perms);
}

See
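The mode-string form can be checked without touching the file system: `PosixFilePermissions.fromString` parses a `chmod`-style string into the permission set, which makes it easy to verify that the "others" class gets nothing. A small runnable sketch:

```java
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PermDemo {
    public static void main(String[] args) {
        // "rwxr-x---": full access for the owner, read/execute for the group,
        // and no permissions at all for the "others" class.
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwxr-x---");

        System.out.println(perms.contains(PosixFilePermission.OTHERS_READ));
        System.out.println(perms.contains(PosixFilePermission.OTHERS_WRITE));
        System.out.println(perms.contains(PosixFilePermission.OTHERS_EXECUTE));
        System.out.println(perms.contains(PosixFilePermission.OWNER_READ));
    }
}
```

The three `OTHERS_*` checks print `false` and the owner check prints `true`, confirming the restrictive mode before it is ever applied with `Files.setPosixFilePermissions`.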
|
||||||||||||
java:S3752 |
An HTTP method is safe when used to perform a read-only operation, such as retrieving information. In contrast, an unsafe HTTP method is used to change the state of an application, for instance to update a user’s profile on a web application.

Common safe HTTP methods are GET, HEAD, and OPTIONS. Common unsafe HTTP methods are POST, PUT and DELETE.

Allowing both safe and unsafe HTTP methods to perform a specific operation on a web application could impact its security; for example, CSRF protections usually only cover operations performed by unsafe HTTP methods.

Ask Yourself Whether
There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

For all the routes/controllers of an application, the authorized HTTP methods should be explicitly defined and safe HTTP methods should only be used to perform read-only operations.

Sensitive Code Example

@RequestMapping("/delete_user") // Sensitive: by default all HTTP methods are allowed
public String delete1(String username) {
  // state of the application will be changed here
}

@RequestMapping(path = "/delete_user", method = {RequestMethod.GET, RequestMethod.POST}) // Sensitive: both safe and unsafe methods are allowed
String delete2(@RequestParam("id") String id) {
  // state of the application will be changed here
}

Compliant Solution

@RequestMapping(path = "/delete_user", method = RequestMethod.POST) // Compliant
public String delete1(String username) {
  // state of the application will be changed here
}

@RequestMapping(path = "/delete_user", method = RequestMethod.POST) // Compliant
String delete2(@RequestParam("id") String id) {
  // state of the application will be changed here
}

See
|
||||||||||||
java:S4601 |
Spring Framework, and, more precisely, the Spring Security component, allows setting up access control checks at the URI level. This is done by adding request matchers to the security configuration, each authorizing access to some resources depending on the incoming request entitlement. Similarly to firewall filtering rules, the order in which those matchers are defined is security relevant.

Why is this an issue?

Configured URL matchers are considered in the order they are declared. In particular, for a given resource, if a looser filter is defined before a stricter one, only the less secure configuration will apply. No request will ever reach the stricter rule.

This rule raises an issue when:
What is the potential impact?Access control rules that have been defined but cannot be applied generally indicate an error in the filtering process. In most cases, this will have consequences on the application’s authorization and authentication mechanisms. Authentication bypassWhen the ignored access control rule is supposed to enforce the authentication on a resource, the consequence is a bypass of the authentication for that resource. Depending on the scope of the ignored rule, a single feature or whole sections of the application can be left unprotected. Attackers could take advantage of such an issue to access the affected features without prior authentication, which may impact the confidentiality or integrity of sensitive, business, or personal data. Privilege escalationWhen the ignored access control rule is supposed to verify the role of an authenticated user, the consequence is a privilege escalation or authorization bypass. An authenticated user with low privileges on the application will be able to access more critical features or sections of the application. This could have financial consequences if the accessed features are normally accessed by paying users. It could also impact the confidentiality or integrity of sensitive, business, or personal data, depending on the features. How to fix it in SpringCode examplesThe following code is vulnerable because it defines access control configuration in the wrong order. 
Noncompliant code example:

    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers("/resources/**", "/signup", "/about").permitAll()
            .antMatchers("/admin/**").hasRole("ADMIN")
            .antMatchers("/admin/login").permitAll() // Noncompliant
            .antMatchers("/**", "/home").permitAll()
            .antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')") // Noncompliant
            .and().formLogin().loginPage("/login").permitAll().and().logout().permitAll();
    }

Compliant solution:

    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .antMatchers("/resources/**", "/signup", "/about").permitAll()
            .antMatchers("/admin/login").permitAll()
            .antMatchers("/admin/**").hasRole("ADMIN")
            .antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')")
            .antMatchers("/**", "/home").permitAll()
            .and().formLogin().loginPage("/login").permitAll().and().logout().permitAll();
    }

Resources: Documentation
Standards |
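The ordering pitfall can be reproduced outside Spring with a minimal first-match router simulation. This is a sketch: the `Rule` class, the `firstMatch` method, and the simplified prefix matching are illustrative stand-ins, not Spring APIs.

```java
import java.util.List;

public class MatcherOrder {
    static final class Rule {
        final String prefix;
        final String decision;
        Rule(String prefix, String decision) { this.prefix = prefix; this.decision = decision; }
    }

    // Returns the decision of the first rule whose prefix matches the path,
    // mimicking how a matcher chain stops at the first hit.
    static String firstMatch(List<Rule> rules, String path) {
        for (Rule r : rules) {
            if (path.startsWith(r.prefix.replace("**", ""))) {
                return r.decision;
            }
        }
        return "DENY";
    }

    public static void main(String[] args) {
        // Loose rule first: the stricter admin rule is unreachable.
        List<Rule> bad = List.of(new Rule("/**", "permitAll"), new Rule("/admin/**", "hasRole(ADMIN)"));
        // Stricter rule first: admin paths actually require the ADMIN role.
        List<Rule> good = List.of(new Rule("/admin/**", "hasRole(ADMIN)"), new Rule("/**", "permitAll"));
        System.out.println(firstMatch(bad, "/admin/secret"));  // permitAll
        System.out.println(firstMatch(good, "/admin/secret")); // hasRole(ADMIN)
    }
}
```

With the loose `/**` matcher first, `/admin/secret` is granted to everyone; swapping the order restores the intended role check.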
||||||||||||
java:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, fixing the issue takes longer, which increases the attack’s impact. Ask Yourself Whether: The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices: Don’t hardcode the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Example:

    String ip = "192.168.12.42"; // Sensitive
    Socket socket = new Socket(ip, 6667);

Compliant Solution:

    String ip = System.getenv("IP_ADDRESS"); // Compliant
    Socket socket = new Socket(ip, 6667);

Exceptions: No issue is reported for the following cases because they are not considered sensitive:
See
|
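The recommended environment-variable approach can be sketched with plain Java. The `IP_ADDRESS` variable name and the loopback fallback are illustrative assumptions (loopback addresses are among the cases the rule does not consider sensitive):

```java
public class IpConfig {
    // Prefer a configured address; fall back to loopback when nothing is configured.
    static String resolveIp(String configured) {
        return configured != null ? configured : "127.0.0.1";
    }

    public static void main(String[] args) {
        String ip = resolveIp(System.getenv("IP_ADDRESS"));
        System.out.println("Connecting to " + ip);
    }
}
```

Separating the lookup from the fallback keeps the policy testable without touching the real environment.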
||||||||||||
java:S2647 |
This rule is deprecated, and will eventually be removed. Basic authentication is a vulnerable method of user authentication that should be avoided. It functions by transmitting a Base64-encoded username and password. As Base64 is easy to recognize and reverse, sensitive data may be leaked this way. Why is this an issue? Basic authentication is a simple and widely used method of user authentication for HTTP requests. When a client sends a request to a server that requires authentication, the client includes the username and password (concatenated together and Base64 encoded) in the "Authorization" header of the HTTP request. The server verifies the credentials and grants access if they are valid. Every request sent to a protected endpoint must include these credentials. Basic authentication is considered insecure for several reasons:
These security limitations make basic authentication an insecure choice for authentication or authorization over HTTP. What is the potential impact?Basic authentication transmits passwords in plain text, which makes it vulnerable to interception by attackers. Session hijacking and man-in-the-middle attackIf an attacker gains access to the network traffic, they can easily capture the username and password. Basic authentication does not provide any mechanism to protect against session hijacking attacks. Once a user is authenticated, the session identifier (the username and password) is sent in clear text with each subsequent request. If attackers can intercept one request, they can use it to impersonate the authenticated user, gaining unauthorized access to their account and potentially performing malicious actions. Brute-force attacksBasic authentication does not have any built-in protection against brute-force attacks. Attackers can repeatedly guess passwords until they find the correct one, especially if weak or commonly used passwords are used. This can lead to unauthorized access to user accounts and potential data breaches. How to fix it in Java SECode examplesThe following code uses basic authentication to send out an HTTP request to a protected endpoint. 
Noncompliant code example:

    String encoded = Base64.getEncoder().encodeToString("login:passwd".getBytes());
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Authorization", "Basic " + encoded); // Noncompliant

Compliant solution:

    // An access token should be retrieved before the HTTP request
    String accessToken = System.getenv("ACCESS_TOKEN");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Authorization", "Bearer " + accessToken);

How does this work? Token-based authentication and OAuth: Token-based authentication is a safer alternative to basic authentication. A unique token is generated upon successful authentication and sent to the client, which then includes it in subsequent requests. Therefore, it eliminates the need to transmit sensitive credentials with each request. OAuth also works by authenticating users via tokens. It gives even more flexibility on top of this by offering scopes, which limit an application’s access to a user’s account. Additionally, both token-based authentication and OAuth support mechanisms for token expiration, revocation, and refresh. This gives more flexibility than basic authentication, as compromised tokens carry much less risk than a compromised password. SSL encryption for HTTP requests: With basic authentication, user credentials are transmitted in plain text, which makes them vulnerable to interception and eavesdropping. However, when HTTPS is employed, the data is encrypted before transmission, making it significantly more difficult for attackers to intercept and decipher the credentials. If no other form of authentication is possible for this code, then every request must be sent over HTTPS to ensure credentials are kept safe. Resources: Documentation
Standards |
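To see why Base64 offers no secrecy, note that anyone who intercepts the header value can recover the credentials with the JDK alone (a sketch; the credentials are the placeholder values from the example above):

```java
import java.util.Base64;

public class BasicAuthDecode {
    // Recover the plaintext credentials from a Basic authentication header value.
    static String decode(String base64Credentials) {
        return new String(Base64.getDecoder().decode(base64Credentials));
    }

    public static void main(String[] args) {
        String encoded = Base64.getEncoder().encodeToString("login:passwd".getBytes());
        // Encoding is not encryption: the round-trip recovers the original credentials.
        System.out.println(decode(encoded)); // prints "login:passwd"
    }
}
```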
||||||||||||
java:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrityWhen TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. 
How to fix it in Java Cryptography Extension. Code examples: The following code contains examples of disabled certificate validation. Certificate validation is disabled by overriding the `X509TrustManager` check methods with implementations that never throw. Noncompliant code example:

    class TrustAllManager implements X509TrustManager {
        @Override
        public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
            // Noncompliant
        }

        @Override
        public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
            // Noncompliant
        }

        @Override
        public X509Certificate[] getAcceptedIssuers() {
            return null;
        }
    }

How does this work? Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificates: If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAs: In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. Here is a sample command to import a certificate into the Java trust store:

    keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources: Standards
|
||||||||||||
java:S2658 |
This rule is deprecated; use S6173 instead. Why is this an issue? Dynamically loaded classes could contain malicious code executed by a static class initializer, i.e. you wouldn’t even have to instantiate or explicitly invoke methods on such classes to be vulnerable to an attack. This rule raises an issue for each use of dynamic class loading. Noncompliant code example:

    String className = System.getProperty("messageClassName");
    Class clazz = Class.forName(className); // Noncompliant

Resources |
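When dynamic loading cannot be avoided, one common mitigation (not part of the rule text) is to validate the requested name against a fixed allow-list before calling `Class.forName`. The class names in the allow-list below are illustrative:

```java
import java.util.Set;

public class SafeClassLoading {
    // Only classes on this fixed allow-list may be loaded dynamically.
    private static final Set<String> ALLOWED = Set.of(
        "com.example.messages.PlainMessage",
        "com.example.messages.HtmlMessage");

    static boolean isAllowed(String className) {
        return ALLOWED.contains(className);
    }

    // Rejects any name outside the allow-list before it can reach the class loader.
    static Class<?> loadMessageClass(String className) throws ClassNotFoundException {
        if (!isAllowed(className)) {
            throw new IllegalArgumentException("Class not allowed: " + className);
        }
        return Class.forName(className);
    }
}
```

Because the check happens before loading, even a class with a malicious static initializer is never touched.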
||||||||||||
java:S5804 |
User enumeration refers to the ability to guess existing usernames in a web application database. This can happen, for example, when using the "sign-in/sign-on/forgot password" functionalities of a website. When a user tries to sign in with an incorrect username/login, the web application should not disclose that the username doesn’t exist with a message similar to "this username is incorrect". Instead, a generic message should be used, like "bad credentials"; this way it’s not possible to guess whether the username or the password was incorrect during the authentication. If a user-management feature discloses information about the existence of a username, attackers can use brute force attacks to retrieve a large number of valid usernames, which will impact the privacy of the corresponding users and facilitate other attacks (phishing, password guessing, etc.). Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesWhen a user performs a request involving a username, it should not be possible to spot differences between a valid and incorrect username:
Sensitive Code Example: In a Spring Security web application, the username leaks when:
    public String authenticate(String username, String password) {
        // ....
        MyUserDetailsService s1 = new MyUserDetailsService();
        MyUserPrincipal u1 = s1.loadUserByUsername(username);
        if (u1 == null) {
            throw new BadCredentialsException(username + " doesn't exist in our database"); // Sensitive
        }
        // ....
    }
    public String authenticate(String username, String password) {
        // ....
        if (user == null) {
            throw new UsernameNotFoundException("user not found"); // Sensitive
        }
        // ....
    }
    DaoAuthenticationProvider daoauth = new DaoAuthenticationProvider();
    daoauth.setUserDetailsService(new MyUserDetailsService());
    daoauth.setPasswordEncoder(new BCryptPasswordEncoder());
    daoauth.setHideUserNotFoundExceptions(false); // Sensitive
    builder.authenticationProvider(daoauth);

Compliant Solution: In a Spring Security web application:
    public String authenticate(String username, String password) throws AuthenticationException {
        Details user = null;
        try {
            user = loadUserByUsername(username);
        } catch (UsernameNotFoundException | DataAccessException e) {
            // Hide this exception reason to not disclose that the username doesn't exist
        }
        if (user == null || !user.isPasswordCorrect(password)) {
            // User should not be able to guess if the bad credentials message is related to the username or the password
            throw new BadCredentialsException("Bad credentials");
        }
    }
    DaoAuthenticationProvider daoauth = new DaoAuthenticationProvider();
    daoauth.setUserDetailsService(new MyUserDetailsService());
    daoauth.setPasswordEncoder(new BCryptPasswordEncoder());
    daoauth.setHideUserNotFoundExceptions(true); // Compliant
    builder.authenticationProvider(daoauth);

See |
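The same principle can be sketched without Spring: both failure modes surface one generic message, so a caller cannot tell an unknown user from a wrong password. The in-memory `USERS` map, its placeholder credentials, and the `GENERIC_ERROR` constant are illustrative:

```java
import java.util.Map;

public class UniformErrors {
    // Placeholder credential store for the sketch only; real code would use a hashed store.
    private static final Map<String, String> USERS = Map.of("alice", "s3cret");
    static final String GENERIC_ERROR = "Bad credentials";

    // Returns null on success, and the SAME generic message for unknown user or wrong password.
    static String authenticate(String username, String password) {
        String stored = USERS.get(username);
        if (stored == null || !stored.equals(password)) {
            return GENERIC_ERROR; // identical for both failure causes
        }
        return null;
    }
}
```

Note that the message is identical regardless of which check failed, which is exactly what `setHideUserNotFoundExceptions(true)` achieves in Spring Security.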
||||||||||||
java:S5808 |
When granting users access to resources of an application, such an authorization should be based on strong decisions. For instance, a user may be authorized to access a resource only if they are authenticated, or if they have the correct role and privileges. Why is this an issue?Access control is a critical aspect of web frameworks that ensures proper authorization and restricts access to sensitive resources or actions. To enable access control, web frameworks offer components that are responsible for evaluating user permissions and making access control decisions. They might examine the user’s credentials, such as roles or privileges, and compare them against predefined rules or policies to determine whether the user should be granted access to a specific resource or action. Conventionally, these checks should never grant access to every request received. If an endpoint or component is meant to be public, then it should be ignored by access control components. Conversely, if an endpoint should deny some users from accessing it, then access control has to be configured correctly for this endpoint. Granting unrestricted access to all users can lead to security vulnerabilities and potential misuse of critical functionalities. It is important to carefully assess access decisions based on factors such as user roles, resource sensitivity, and business requirements. Implementing a robust and granular access control mechanism is crucial for the security and integrity of the web application itself and its surrounding environment. What is the potential impact?Not verifying user access strictly can introduce significant security risks. Some of the most prominent risks are listed below. Depending on the use case, it is very likely that other risks are introduced on top of the ones listed. 
Unauthorized accessAs the access of users is not checked strictly, it becomes very easy for an attacker to gain access to restricted areas or functionalities, potentially compromising the confidentiality, integrity, and availability of sensitive resources. They may exploit this access to perform malicious actions, such as modifying or deleting data, impersonating legitimate users, or gaining administrative privileges, ultimately compromising the security of the system. Theft of sensitive dataTheft of sensitive data can result from incorrect access control if attackers manage to gain access to databases, file systems, or other storage mechanisms where sensitive data is stored. This can lead to the theft of personally identifiable information (PII), financial data, intellectual property, or other confidential information. The stolen data can be used for various malicious purposes, such as identity theft, financial fraud, or selling the data on the black market, causing significant harm to individuals and organizations affected by the breach. 
How to fix it in Spring. Code examples. Noncompliant code example: An `AccessDecisionVoter` implementation:

    public class WeakNightVoter implements AccessDecisionVoter {
        @Override
        public int vote(Authentication authentication, Object object, Collection collection) {
            Calendar calendar = Calendar.getInstance();
            int currentHour = calendar.get(Calendar.HOUR_OF_DAY);
            if (currentHour >= 8 && currentHour <= 19) {
                return ACCESS_GRANTED;
            }
            return ACCESS_ABSTAIN; // Noncompliant: when users connect during the night, no decision is made
        }
    }

A `PermissionEvaluator` implementation:

    public class MyPermissionEvaluator implements PermissionEvaluator {
        @Override
        public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
            Object user = authentication.getPrincipal();
            if (user.getRole().equals(permission)) {
                return true;
            }
            return true; // Noncompliant: access is granted regardless of the permission check
        }
    }

Compliant solution: The `AccessDecisionVoter` implementation:

    public class StrongNightVoter implements AccessDecisionVoter {
        @Override
        public int vote(Authentication authentication, Object object, Collection collection) {
            Calendar calendar = Calendar.getInstance();
            int currentHour = calendar.get(Calendar.HOUR_OF_DAY);
            if (currentHour >= 8 && currentHour <= 19) {
                return ACCESS_GRANTED;
            }
            return ACCESS_DENIED; // Users are not allowed to connect during the night
        }
    }

The `PermissionEvaluator` implementation:

    public class MyPermissionEvaluator implements PermissionEvaluator {
        @Override
        public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
            Object user = authentication.getPrincipal();
            if (user.getRole().equals(permission)) {
                return true;
            }
            return false;
        }
    }

Resources: Standards |
||||||||||||
java:S6263 |
In AWS, long-term access keys will be valid until you manually revoke them. This makes them highly sensitive, as any exposure can have serious consequences, and they should be used with care. This rule will trigger when encountering an instantiation of `BasicAWSCredentials`. Ask Yourself Whether
For more information, see Use IAM roles instead of long-term access keys. There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: Consider using IAM roles or other features of the AWS Security Token Service that provide temporary credentials, limiting the risks. Sensitive Code Example:

    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.BasicAWSCredentials;
    // ...
    AWSCredentials awsCredentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);

Compliant Solution: Example for AWS STS (see Getting Temporary Credentials with AWS STS).

    BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
        session_creds.getAccessKeyId(),
        session_creds.getSecretAccessKey(),
        session_creds.getSessionToken());

See |
||||||||||||
java:S6362 |
WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. In the context of a WebView, JavaScript code can exfiltrate local files that might be sensitive or even worse, access exposed functions of the application that can result in more severe vulnerabilities such as code injection. Thus JavaScript support should not be enabled for WebViews unless it is absolutely necessary and the authenticity of the web resources can be guaranteed. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: It is recommended to disable JavaScript support for WebViews unless it is necessary to execute JavaScript code. Only trusted pages should be rendered. Sensitive Code Example:

    import android.webkit.WebView;

    WebView webView = (WebView) findViewById(R.id.webview);
    webView.getSettings().setJavaScriptEnabled(true); // Sensitive

Compliant Solution:

    import android.webkit.WebView;

    WebView webView = (WebView) findViewById(R.id.webview);
    webView.getSettings().setJavaScriptEnabled(false);

See |
||||||||||||
java:S6363 |
WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. If malicious JavaScript code in a WebView is executed this can leak the contents of sensitive files when access to local files is enabled. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices: It is recommended to disable access to local files for WebViews unless it is necessary. In the case of a successful attack through a Cross-Site Scripting vulnerability, the attacker’s capabilities decrease drastically if no local files can be read out. Sensitive Code Example:

    import android.webkit.WebView;

    WebView webView = (WebView) findViewById(R.id.webview);
    webView.getSettings().setAllowFileAccess(true); // Sensitive
    webView.getSettings().setAllowContentAccess(true); // Sensitive

Compliant Solution:

    import android.webkit.WebView;

    WebView webView = (WebView) findViewById(R.id.webview);
    webView.getSettings().setAllowFileAccess(false);
    webView.getSettings().setAllowContentAccess(false);

See |
||||||||||||
java:S5042 |
Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g. a long string of repeated bytes). Ask Yourself Whether: Archives to expand are untrusted and:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Example:

    File f = new File("ZipBomb.zip");
    ZipFile zipFile = new ZipFile(f);
    Enumeration<? extends ZipEntry> entries = zipFile.entries(); // Sensitive
    while (entries.hasMoreElements()) {
        ZipEntry ze = entries.nextElement();
        File out = new File("./output_onlyfortesting.txt");
        Files.copy(zipFile.getInputStream(ze), out.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }

Compliant Solution: Do not rely on `ZipEntry.getSize()` to retrieve the size of an uncompressed entry, because this method returns what is declared in the archive headers, which can be forged by attackers. Instead, measure the actual entry size while unzipping it:

    File f = new File("ZipBomb.zip");
    ZipFile zipFile = new ZipFile(f);
    Enumeration<? extends ZipEntry> entries = zipFile.entries();
    int THRESHOLD_ENTRIES = 10000;
    int THRESHOLD_SIZE = 1000000000; // 1 GB
    double THRESHOLD_RATIO = 10;
    int totalSizeArchive = 0;
    int totalEntryArchive = 0;
    while (entries.hasMoreElements()) {
        ZipEntry ze = entries.nextElement();
        InputStream in = new BufferedInputStream(zipFile.getInputStream(ze));
        OutputStream out = new BufferedOutputStream(new FileOutputStream("./output_onlyfortesting.txt"));
        totalEntryArchive++;
        int nBytes = -1;
        byte[] buffer = new byte[2048];
        int totalSizeEntry = 0;
        while ((nBytes = in.read(buffer)) > 0) { // Compliant
            out.write(buffer, 0, nBytes);
            totalSizeEntry += nBytes;
            totalSizeArchive += nBytes;
            double compressionRatio = totalSizeEntry / (double) ze.getCompressedSize();
            if (compressionRatio > THRESHOLD_RATIO) {
                // ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb attack
                break;
            }
        }
        if (totalSizeArchive > THRESHOLD_SIZE) {
            // the uncompressed data size is too large for the application's resource capacity
            break;
        }
        if (totalEntryArchive > THRESHOLD_ENTRIES) {
            // too many entries in this archive can lead to inode exhaustion on the system
            break;
        }
    }

See
|
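The ratio check from the compliant solution can be isolated into a small helper for testing (a sketch; the method name and thresholds are illustrative). Note the cast to `double`, without which integer division would round the ratio down and let suspicious entries slip past the threshold:

```java
public class ZipLimits {
    // Flags an entry whose uncompressed-to-compressed ratio exceeds the threshold.
    static boolean suspiciousRatio(long uncompressedBytes, long compressedBytes, double maxRatio) {
        if (compressedBytes <= 0) {
            return true; // an entry reporting no compressed size cannot be trusted
        }
        return uncompressedBytes / (double) compressedBytes > maxRatio;
    }
}
```

A zip bomb entry typically shows ratios in the hundreds or thousands, while legitimate data rarely exceeds a ratio of 10.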
||||||||||||
java:S6373 |
The XML standard allows the inclusion of other XML files with the XInclude `include` element. Why is this an issue? When the XML parser encounters an XInclude element, it loads the referenced file and inserts its contents into the document. The files that can be accessed that way are only limited by the entitlement of the application on the local system and the network filtering the server is subject to. This issue is particularly severe when the XML parser is used to parse untrusted documents. For example, when user-submitted XML messages are parsed that way. What is the potential impact? Allowing the inclusion of arbitrary files in XML documents can have two main consequences depending on what type of file is included: local or remote. Sensitive file disclosure: If the application allows the inclusion of arbitrary files through the XInclude mechanism, local and potentially sensitive files can be disclosed to attackers. This is particularly true if the affected parser is used to process untrusted XML documents. Server-side request forgery: When used to retrieve remote files, the application will send network requests to remote hosts. Moreover, it will do so from its current network location, which can have severe consequences if the application server is located on a sensitive network, such as the company corporate network or a DMZ hosting other applications. Attackers exploiting this issue could try to access internal backend services or corporate file shares. It could allow them to access more sensitive files, bypass authentication mechanisms from frontend applications, or exploit further vulnerabilities in the local services. Note that, in some cases, the requests sent from the application can be automatically authenticated on federated locations. This is often the case in Windows environments when using Active Directory federated authentication.
How to fix it in Java SE. Code examples: The following code is vulnerable because it explicitly enables the XInclude feature. Noncompliant code example:

    import javax.xml.parsers.SAXParserFactory;

    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setXIncludeAware(true); // Noncompliant
    factory.setFeature("http://apache.org/xml/features/xinclude", true); // Noncompliant

Compliant solution:

    import javax.xml.parsers.SAXParserFactory;

    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setXIncludeAware(false);
    factory.setFeature("http://apache.org/xml/features/xinclude", false);

Resources: Documentation
Standards |
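The compliant configuration can be verified programmatically with only the JDK. This is a sketch: the `hardenedFactory` helper name is illustrative, and it additionally enables JAXP secure processing, which is not part of the rule's example:

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.SAXParserFactory;

public class XIncludeOff {
    // Builds a SAX parser factory with XInclude processing disabled
    // and the JAXP secure-processing limits enabled.
    static SAXParserFactory hardenedFactory() {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setXIncludeAware(false);
            factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
            return factory;
        } catch (Exception e) {
            throw new IllegalStateException("Cannot configure a hardened SAX factory", e);
        }
    }
}
```

Centralizing parser creation in one helper keeps the hardening from being forgotten at individual call sites.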
||||||||||||
java:S6374 |
This rule is deprecated; use S2755 instead. Why is this an issue? By default, XML processors attempt to load all XML schemas and DTDs referenced by a document (their locations are defined with `schemaLocation` attributes and `DOCTYPE` declarations). Noncompliant code example: For DocumentBuilder, SAXParser and Schema JAXP factories:

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setValidating(true); // Noncompliant
    factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setValidating(true); // Noncompliant
    factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

    SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
    schemaFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

For the Dom4j library:

    SAXReader xmlReader = new SAXReader(); // Noncompliant
    xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

For the Jdom2 library:

    SAXBuilder builder = new SAXBuilder();
    builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

Compliant solution: For DocumentBuilder, SAXParser and Schema JAXP factories:

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

    SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
    schemaFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

For the Dom4j library:

    SAXReader xmlReader = new SAXReader();
    xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

For the Jdom2 library:

    SAXBuilder builder = new SAXBuilder();
    builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

Exceptions: This rule does not raise an issue when an `EntityResolver` is set:

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setValidating(true);
    DocumentBuilder builder = factory.newDocumentBuilder();
    builder.setEntityResolver(new MyEntityResolver());

    SAXBuilder builder = new SAXBuilder();
    builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true);
    builder.setEntityResolver(new EntityResolver());

Resources
|
||||||||||||
java:S6376 |
Denial of Service attacks against XML parsers target the software components responsible for parsing and interpreting XML documents. Why is this an issue? XML files are complex data structures. When a malicious user is able to submit an XML file, it triggers complex processing that may overwhelm the parser. Most of the time, this complex processing is enabled by default, and XML parsers do not take preventive measures against Denial of Service attacks. What is the potential impact? When an attacker successfully exploits the vulnerability, it can lead to a Denial of Service (DoS) condition. System Unavailability: The affected system becomes unresponsive or crashes, rendering it unavailable to legitimate users. This can have severe consequences, especially for critical systems that rely on continuous availability, such as web servers, APIs, or network services. Amplification Attacks: In some cases, XML parser Denial of Service attacks can be used as part of larger-scale amplification attacks. By leveraging the vulnerability, attackers can generate a disproportionately large response from the targeted system, amplifying the impact of their attack. This can result in overwhelming network bandwidth and cause widespread disruption. How to fix it in Java SE. Code examples. Noncompliant code example:

    import javax.xml.parsers.DocumentBuilderFactory;

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

Compliant solution:

    import javax.xml.parsers.DocumentBuilderFactory;

    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

Resources: Documentation
Standards
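To make the compliant configuration concrete, here is a minimal, self-contained sketch (class and method names are illustrative, not from the rule) that enables `FEATURE_SECURE_PROCESSING` before parsing untrusted input:

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SecureXmlParsing {
    // Builds a DocumentBuilder with secure processing enabled, which limits
    // entity expansion and other resource-exhausting constructs.
    public static DocumentBuilder newSecureBuilder() throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        return factory.newDocumentBuilder();
    }

    // Parses an XML string with the hardened builder and returns the root tag name.
    public static String parseRootName(String xml) throws Exception {
        Document doc = newSecureBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getDocumentElement().getTagName();
    }
}
```

The feature is a JAXP-standard flag, so the same call works for `SAXParserFactory` and `TransformerFactory` as well.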
java:S6377
XML signatures are a method used to ensure the integrity and authenticity of XML documents. However, if XML signatures are not validated securely, it can lead to potential vulnerabilities.

Why is this an issue? Before Java 17, the XML Digital Signature API does not apply restrictions on XML signature validation unless the application runs with a security manager, which is rare.

What is the potential impact? By not enforcing secure validation, the XML Digital Signature API is more susceptible to attacks such as signature spoofing and injections.

Increased vulnerability to signature spoofing: by disabling secure validation, the application becomes more susceptible to signature spoofing attacks. Attackers can potentially manipulate the XML signature in a way that bypasses the validation process, allowing them to forge or tamper with the signature. This can lead to the acceptance of invalid or maliciously modified signatures, compromising the integrity and authenticity of the XML documents.

Risk of injection attacks: disabling secure validation can expose the application to injection attacks. Attackers can inject malicious code or entities into the XML document, taking advantage of the weakened validation process. In some cases, it can also expose the application to denial-of-service attacks. Attackers can exploit vulnerabilities in the validation process to cause excessive resource consumption or system crashes, leading to service unavailability or disruption.

How to fix it in Java SE

For versions of Java before 17, secure validation is disabled by default unless the application runs with a security manager, which is rare. It should be enabled explicitly by setting the `org.jcp.xml.dsig.secureValidation` property to `Boolean.TRUE` on the validation context. For Java 17 and higher, secure validation is enabled by default.

Noncompliant code example:

```java
NodeList signatureElement = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
DOMValidateContext valContext = new DOMValidateContext(new KeyValueKeySelector(), signatureElement.item(0)); // Noncompliant
XMLSignature signature = fac.unmarshalXMLSignature(valContext);
boolean signatureValidity = signature.validate(valContext);
```

Compliant solution:

```java
NodeList signatureElement = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");
XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
DOMValidateContext valContext = new DOMValidateContext(new KeyValueKeySelector(), signatureElement.item(0));
valContext.setProperty("org.jcp.xml.dsig.secureValidation", Boolean.TRUE);
XMLSignature signature = fac.unmarshalXMLSignature(valContext);
boolean signatureValidity = signature.validate(valContext);
```

How does this work? When XML Signature secure validation mode is enabled, XML signatures are processed more securely. It enforces a number of restrictions that protect you from XML signatures containing potentially hostile constructs that can cause denial-of-service or other types of security issues.

Resources: Documentation
Standards

java:S1989
Why is this an issue? Servlets are components in Java web development, responsible for processing HTTP requests and generating responses. In this context, exceptions are used to handle and manage unexpected errors or exceptional conditions that may occur during the execution of a servlet.

Catching exceptions within the servlet allows us to convert them into meaningful, user-friendly messages. Otherwise, failing to catch exceptions will propagate them to the servlet container, where the default error-handling mechanism may impact the overall security and stability of the server. Possible security problems are:

Unfortunately, servlet method signatures do not force developers to handle exceptions, since `IOException` and `ServletException` are already declared:

```java
public void doGet(HttpServletRequest request, HttpServletResponse response)
    throws IOException, ServletException {
}
```

To prevent this risk, this rule enforces that all exceptions are caught within the "do*" methods of servlet classes.

How to fix it: surround all method calls that may throw an exception with a `try/catch` block.

Code examples: in the following example, the call to `InetAddress.getByName()` may throw an `UnknownHostException`, which must be caught.

Noncompliant code example:

```java
public void doGet(HttpServletRequest request, HttpServletResponse response)
    throws IOException, ServletException {
  InetAddress addr = InetAddress.getByName(request.getRemoteAddr()); // Noncompliant
  //...
}
```

Compliant solution:

```java
public void doGet(HttpServletRequest request, HttpServletResponse response)
    throws IOException, ServletException {
  try {
    InetAddress addr = InetAddress.getByName(request.getRemoteAddr());
    //...
  } catch (UnknownHostException ex) { // Compliant
    //...
  }
}
```

Resources: Articles & blog posts
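The servlet API is not part of the JDK, but the catch-don't-propagate pattern from the compliant solution can be illustrated with an ordinary method (class and method names below are hypothetical):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SafeLookup {
    // Resolves a host name but never lets UnknownHostException escape,
    // mirroring how a servlet's doGet should catch checked exceptions
    // instead of propagating them to the container.
    public static String resolveOrFallback(String host, String fallback) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException ex) {
            // Log the failure and return a safe default instead of
            // leaking a stack trace to the client.
            return fallback;
        }
    }
}
```

In a real servlet, the `catch` block would typically log the error and send a controlled error response via `HttpServletResponse`.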
java:S6288
Android KeyStore is a secure container for storing key materials; in particular, it prevents key material extraction, i.e., when the application process is compromised, the attacker cannot extract keys but may still be able to use them. It’s possible to enable an Android security feature, user authentication, to restrict usage of keys to only authenticated users. The lock screen has to be unlocked with defined credentials (pattern/PIN/password, biometric).

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices: it’s recommended to enable user authentication, by calling `setUserAuthenticationRequired(true)` during key generation, so that keys can only be used by authenticated users.

Sensitive Code Example: any user can use the key:

```java
KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

KeyGenParameterSpec builder = new KeyGenParameterSpec.Builder("test_secret_key_noncompliant",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT) // Noncompliant
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .build();

keyGenerator.init(builder);
```

Compliant Solution: the use of the key is limited to authenticated users (for a duration of 60 seconds):

```java
KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

KeyGenParameterSpec builder = new KeyGenParameterSpec.Builder("test_secret_key",
        KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .setUserAuthenticationRequired(true)
    .setUserAuthenticationParameters(60, KeyProperties.AUTH_DEVICE_CREDENTIAL)
    .build();

keyGenerator.init(builder);
```

See
java:S6291
Storing data locally is a common task for mobile applications. Such data includes preferences or authentication tokens for external services, among other things. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase, SharedPreferences, and Realm.

By default these systems store the data unencrypted, so an attacker with physical access to the device can read it out easily. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices: it’s recommended to password-encrypt local databases that contain sensitive information. Most systems provide secure alternatives to plain-text storage that should be used. If no secure alternative is available, the data can also be encrypted manually before it is stored. The encryption password should not be hard-coded in the application. There are different approaches for how the password can be provided to encrypt and decrypt the database.

Sensitive Code Example

For SQLiteDatabase:

```java
SQLiteDatabase db = activity.openOrCreateDatabase("test.db", Context.MODE_PRIVATE, null); // Sensitive
```

For SharedPreferences:

```java
SharedPreferences pref = activity.getPreferences(Context.MODE_PRIVATE); // Sensitive
```

For Realm:

```java
RealmConfiguration config = new RealmConfiguration.Builder().build();
Realm realm = Realm.getInstance(config); // Sensitive
```

Compliant Solution

Instead of SQLiteDatabase you can use SQLCipher:

```java
SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null);
```

Instead of SharedPreferences you can use EncryptedSharedPreferences:

```java
String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);
EncryptedSharedPreferences.create(
    "secret",
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
);
```

For Realm an encryption key can be specified in the config:

```java
RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build();
Realm realm = Realm.getInstance(config);
```

See
java:S6293
Android comes with Android KeyStore, a secure container for storing key materials. It’s possible to define certain keys to be unlocked when users authenticate using biometric credentials. This way, even if the application process is compromised, the attacker cannot access keys, as the presence of the authorized user is required.

These keys can be used to encrypt, sign, or create a message authentication code (MAC) as proof that the authentication result has not been tampered with. This protection defeats the scenario where an attacker with physical access to the device would try to hook into the application process and call the `onAuthenticationSucceeded` method directly to bypass the authentication.

Ask Yourself Whether: The application contains:

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices: it’s recommended to tie the biometric authentication to a cryptographic operation by using a `CryptoObject`.

Sensitive Code Example: no `CryptoObject` is passed during authentication:

```java
// ...
BiometricPrompt biometricPrompt = new BiometricPrompt(activity, executor, callback);
// ...
biometricPrompt.authenticate(promptInfo); // Noncompliant
```

Compliant Solution: a `CryptoObject` ties the authentication result to a cryptographic operation:

```java
// ...
BiometricPrompt biometricPrompt = new BiometricPrompt(activity, executor, callback);
// ...
biometricPrompt.authenticate(promptInfo, new BiometricPrompt.CryptoObject(cipher)); // Compliant
```

See
java:S2068
Because it is easy to extract strings from an application's source code or binary, passwords should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Passwords should be stored outside of the code in a configuration file, a database, or a password management service.

This rule flags instances of hard-coded passwords used in database and LDAP connections. It looks for hard-coded passwords in connection strings, and for variable names that match any of the patterns from the provided list.

Ask Yourself Whether

There would be a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Sensitive Code Example:

```java
String username = "steve";
String password = "blue";
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test?" +
    "user=" + username + "&password=" + password); // Sensitive
```

Compliant Solution:

```java
String username = getEncryptedUser();
String password = getEncryptedPassword();
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test?" +
    "user=" + username + "&password=" + password);
```

See
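As a minimal illustration of keeping credentials out of code, the hypothetical helper below reads them from a configuration source (a `java.util.Properties` instance standing in for an external config file or environment):

```java
import java.util.Properties;

public class DbCredentials {
    // Loads credentials from configuration instead of embedding them in
    // the source code; throws if they were never provisioned, so a
    // missing secret fails loudly rather than silently using a default.
    public static String[] fromProperties(Properties props) {
        String user = props.getProperty("db.user");
        String password = props.getProperty("db.password");
        if (user == null || password == null) {
            throw new IllegalStateException("Database credentials not configured");
        }
        return new String[] { user, password };
    }
}
```

In production the `Properties` object would be loaded from a file outside the repository, or the values fetched from `System.getenv` or a secrets manager.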
java:S5332
Clear-text protocols such as Telnet, FTP, SMTP, or HTTP lack encryption of transported data, making it possible for communications to be sniffed or tampered with.

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:

In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

These clients from the Apache Commons Net library are based on unencrypted protocols and are not recommended:

```java
TelnetClient telnet = new TelnetClient(); // Sensitive

FTPClient ftpClient = new FTPClient(); // Sensitive

SMTPClient smtpClient = new SMTPClient(); // Sensitive
```

Unencrypted HTTP connections, when using the okhttp library for instance, should be avoided:

```java
ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT) // Sensitive
    .build();
```

Android WebView can be configured to allow a secure origin to load content from any other origin, even if that origin is insecure (mixed content):

```java
import android.webkit.WebView;

WebView webView = findViewById(R.id.webview);
webView.getSettings().setMixedContentMode(MIXED_CONTENT_ALWAYS_ALLOW); // Sensitive
```

Compliant Solution

Use instead these clients from Apache Commons Net and the JSch/SSH library:

```java
JSch jsch = new JSch();

if (implicit) {
  // implicit mode is considered deprecated but offers the same level of security as explicit mode
  FTPSClient ftpsClient = new FTPSClient(true);
} else {
  FTPSClient ftpsClient = new FTPSClient();
}

if (implicit) {
  // implicit mode is considered deprecated but offers the same level of security as explicit mode
  SMTPSClient smtpsClient = new SMTPSClient(true);
} else {
  SMTPSClient smtpsClient = new SMTPSClient();
  smtpsClient.connect("127.0.0.1", 25);
  if (smtpsClient.execTLS()) {
    // commands
  }
}
```

Perform HTTP encrypted connections, with the okhttp library for instance:

```java
ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
    .build();
```

The most secure mode for Android WebView is `MIXED_CONTENT_NEVER_ALLOW`:

```java
import android.webkit.WebView;

WebView webView = findViewById(R.id.webview);
webView.getSettings().setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW);
```

Exceptions: no issue is reported for the following cases because they are not considered sensitive:

See
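One lightweight defensive measure, not part of the rule itself but in line with its recommendations, is to reject clear-text endpoints before any client is constructed. The helper below is an illustrative sketch (the class name and the set of accepted schemes are assumptions):

```java
import java.net.URI;
import java.util.Set;

public class TransportCheck {
    // Schemes considered encrypted for the purpose of this sketch.
    private static final Set<String> ENCRYPTED = Set.of("https", "ftps", "ssh", "wss");

    // Returns true only for URIs using an encrypted transport scheme,
    // so a clear-text client can be refused before it is ever built.
    public static boolean usesEncryptedTransport(String url) {
        String scheme = URI.create(url).getScheme();
        return scheme != null && ENCRYPTED.contains(scheme.toLowerCase());
    }
}
```

Such a guard can be wired into configuration validation, failing startup when an `http://` or `ftp://` endpoint slips into a deployment descriptor.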
java:S6300
Storing files locally is a common task for mobile applications. Files that are stored unencrypted can be read out and modified by an attacker with physical access to the device. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices: it’s recommended to password-encrypt local files that contain sensitive information. The class EncryptedFile can be used to easily encrypt files.

Sensitive Code Example:

```java
Files.write(path, content); // Sensitive

FileOutputStream out = new FileOutputStream(file); // Sensitive

FileWriter fw = new FileWriter("outfilename", false); // Sensitive
```

Compliant Solution:

```java
String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);

File file = new File(context.getFilesDir(), "secret_data");
EncryptedFile encryptedFile = new EncryptedFile.Builder(
    file,
    context,
    masterKeyAlias,
    EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build();

// write to the encrypted file
FileOutputStream encryptedOutputStream = encryptedFile.openFileOutput();
```

See
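Jetpack Security's `EncryptedFile` is Android-only. As an illustrative analogue for plain Java, this sketch encrypts file contents with AES-GCM from the standard JCE before writing them (the class name and the nonce-prefix layout are assumptions, not the `EncryptedFile` format):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.SecureRandom;
import java.util.Arrays;

public class EncryptedFileStore {
    private static final int TAG_BITS = 128;
    private static final int NONCE_LEN = 12;

    // Writes content encrypted with AES-GCM; the random nonce is prepended
    // to the ciphertext so it is available again for decryption.
    public static void write(Path path, SecretKey key, byte[] content) throws Exception {
        byte[] nonce = new byte[NONCE_LEN];
        new SecureRandom().nextBytes(nonce);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, nonce));
        byte[] ciphertext = cipher.doFinal(content);
        byte[] out = new byte[NONCE_LEN + ciphertext.length];
        System.arraycopy(nonce, 0, out, 0, NONCE_LEN);
        System.arraycopy(ciphertext, 0, out, NONCE_LEN, ciphertext.length);
        Files.write(path, out);
    }

    // Reads the nonce prefix back and decrypts the remainder of the file.
    public static byte[] read(Path path, SecretKey key) throws Exception {
        byte[] in = Files.readAllBytes(path);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(in, 0, NONCE_LEN)));
        return cipher.doFinal(Arrays.copyOfRange(in, NONCE_LEN, in.length));
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }
}
```

On Android, the key returned by `newKey()` would come from the platform Keystore rather than being held in process memory.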
java:S6301
When storing local data in a mobile application, it is common to use a database that can be encrypted. When encryption of this database is enabled, the encryption key must be protected properly.

Why is this an issue? Mobile applications often need to store data (which might be sensitive) locally. For Android, there exist several libraries that simplify this process by offering a feature-rich database system. SQLCipher and Realm are examples of such libraries.

These libraries often add support for database encryption, to protect the contents from being read by other apps or by attackers. When using encryption for such a database, it is important that the encryption key stays secret. If this key is hardcoded in the application, then it should be considered compromised: it will be known by anyone with access to the application’s binary code or source code. This means that the sensitive encrypted data can be decrypted by anyone having access to the binary of the mobile application. Furthermore, if the key is hardcoded, it is the same for every user, so a compromise of this encryption key implicates every user of the app.

What is the potential impact? If an attacker is able to find the encryption key for the mobile database, this can potentially have severe consequences.

Theft of sensitive data: if a mobile database is encrypted, it is likely to contain data that is sensitive for the user or the app publisher. For example, it can contain personally identifiable information (PII), financial data, login credentials, or other sensitive user data. By not protecting the encryption key properly, it becomes very easy for an attacker to recover it and then decrypt the mobile database. At that point, the theft of sensitive data might lead to identity theft, financial fraud, and other forms of malicious activities.

How to fix it in Realm

Code examples: in the example below, a local database is opened using a hardcoded key. To fix this, the key is moved to a secure location instead and retrieved using a `getKey()` method.

Noncompliant code example:

```java
String key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g";

RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(key.getBytes()) // Noncompliant
    .build();

Realm realm = Realm.getInstance(config);
```

Compliant solution:

```java
RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build();

Realm realm = Realm.getInstance(config);
```

How does this work?

Using Android’s built-in key storage options: the Android Keystore system allows apps to store encryption keys in a container that is protected on a system level. Additionally, it can restrict when and how the keys are used. For example, it allows the app to require user authentication (for example using a fingerprint) before the key is made available. This is the recommended way to store cryptographic keys on Android.

Dynamically retrieving encryption keys remotely: as user devices are less trusted than controlled environments such as the application backend, the latter should be preferred for the storage of encryption keys. This requires that a user’s device has an internet connection, which may not be suitable for every use case.

Going the extra mile: avoid storing sensitive data on user devices. In general, it is always preferable to store as little sensitive data on user devices as possible. Of course, some sensitive data always has to be stored on client devices, such as the data required for authentication. In this case, consider whether the application logic can also function with a hash (or otherwise non-reversible form) of that data. For example, if an email address is required for authentication, it might be possible to use and store a hashed version of this address instead.

Resources: Documentation
Standards
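As a desktop-Java analogue of keeping the key out of source code, the sketch below stores a secret key in a password-protected PKCS12 keystore (the alias and class names are illustrative; on Android, the platform Keystore should be used instead, since it never exposes key bytes at all):

```java
import javax.crypto.SecretKey;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class KeyVault {
    // Serializes a keystore containing the database encryption key,
    // protected by a password, instead of hardcoding the key bytes.
    public static byte[] save(SecretKey key, char[] storePassword) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null); // initialize an empty keystore
        ks.setEntry("db-key", new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(storePassword));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, storePassword);
        return out.toByteArray();
    }

    // Loads the key back; only callers knowing the password can recover it.
    public static SecretKey load(byte[] stored, char[] storePassword) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(new ByteArrayInputStream(stored), storePassword);
        return (SecretKey) ks.getKey("db-key", storePassword);
    }
}
```

The keystore password itself must still come from user input or a secure prompt, never from a string literal.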
java:S5693
Rejecting requests with a significant content length is a good practice to control network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example: with the rule's default limit value of 8388608 (8 MB), the following configurations are sensitive. A 100 MB file is allowed to be uploaded:

```java
@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  multipartResolver.setMaxUploadSize(104857600); // Sensitive (100 MB)
  return multipartResolver;
}

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  // Sensitive: by default, if the maxUploadSize property is not defined, there is no limit and thus it's insecure
  return multipartResolver;
}

@Bean
public MultipartConfigElement multipartConfigElement() {
  MultipartConfigFactory factory = new MultipartConfigFactory(); // Sensitive: no limit by default
  return factory.createMultipartConfig();
}
```

Compliant Solution: file upload size is limited to 8 MB:

```java
@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  multipartResolver.setMaxUploadSize(8388608); // Compliant (8 MB)
  return multipartResolver;
}
```

See
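The same size check can be expressed framework-free; this hypothetical guard mirrors `setMaxUploadSize(8388608)` by validating a declared Content-Length up front, before any bytes are buffered:

```java
public class UploadLimit {
    // 8 * 1024 * 1024 = 8388608, matching the rule's default threshold.
    public static final long MAX_UPLOAD_BYTES = 8 * 1024 * 1024;

    // Rejects a request whose declared Content-Length is missing (-1)
    // or exceeds the configured limit.
    public static boolean isAllowed(long contentLength) {
        return contentLength >= 0 && contentLength <= MAX_UPLOAD_BYTES;
    }
}
```

Note that Content-Length can be absent or spoofed (e.g. chunked transfer encoding), so a real server must also enforce the limit while reading the stream.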
java:S5344
The improper storage of passwords poses a significant security risk to software applications. This vulnerability arises when passwords are stored in plaintext or with a fast hashing algorithm. To exploit this vulnerability, an attacker typically requires access to the stored passwords.

Why is this an issue? Attackers who get access to the stored passwords could reuse them without further attacks or with little additional effort.

What is the potential impact? Plaintext or weakly hashed password storage poses a significant security risk to software applications.

Unauthorized access: when passwords are stored in plaintext or with weak hashing algorithms, an attacker who gains access to the password database can easily retrieve and use the passwords to gain unauthorized access to user accounts. This can lead to various malicious activities, such as unauthorized data access, identity theft, or even financial fraud.

Credential reuse: many users tend to reuse passwords across multiple platforms. If an attacker obtains plaintext or weakly hashed passwords, they can potentially use these credentials to gain unauthorized access to other accounts held by the same user. This can have far-reaching consequences, as sensitive personal information or critical systems may be compromised.

Regulatory compliance: many industries and jurisdictions have specific regulations and standards to protect user data and ensure its confidentiality. Storing passwords in plaintext or with weak hashing algorithms can lead to non-compliance with these regulations, potentially resulting in legal consequences, financial penalties, and damage to the reputation of the software application and its developers.

How to fix it in Spring

Noncompliant code example: the following code is vulnerable because it uses a legacy digest-based password encoding that is not considered secure.

```java
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth, DataSource dataSource) throws Exception {
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("SELECT * FROM users WHERE username = ?")
    .passwordEncoder(new StandardPasswordEncoder()); // Noncompliant
}
```

Compliant solution:

```java
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth, DataSource dataSource) throws Exception {
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("SELECT * FROM users WHERE username = ?")
    .passwordEncoder(new BCryptPasswordEncoder());
}
```

How does this work?

Use secure password hashing algorithms: in general, you should rely on an algorithm that has no known security vulnerabilities. The MD5 and SHA-1 algorithms should not be used. Some algorithms, such as the SHA family functions, are considered strong for some use cases, but are too fast in computation and therefore vulnerable to brute force attacks, especially with brute-force-oriented hardware. To protect passwords, it is therefore important to choose modern, slow password-hashing algorithms. The following algorithms are, in order of strength, the most secure password hashing algorithms to date:

Argon2 should be the first choice, and the others should be used when it is not available. For systems that must use FIPS-140-certified algorithms, PBKDF2 should be used. Whenever possible, choose the strongest algorithm available. If the algorithm currently used by your system should be upgraded, OWASP documents possible upgrade methods here: Upgrading Legacy Hashes. In the previous example, the insecure `StandardPasswordEncoder` was replaced with `BCryptPasswordEncoder`, which relies on the much slower bcrypt algorithm.

Never store passwords in plaintext: a user password should never be stored in plaintext. Instead, a hash should be produced from it using a secure algorithm. When dealing with password storage security, best practices recommend relying on a slow hashing algorithm that will make brute force attacks more difficult. Using a hashing function with adaptable computation and memory complexity is also recommended, to be able to increase the security level over time. Adding a salt to the digest computation is also recommended to prevent pre-computed table attacks (see rule S2053).

Pitfalls: pre-hashing passwords. As bcrypt has a maximum input length of 72 bytes for most implementations, some developers may be tempted to pre-hash the password with a stronger algorithm before hashing it with bcrypt. Pre-hashing passwords with bcrypt is not recommended as it can lead to a specific range of issues. Using a strong salt and a high number of rounds is enough to protect the password. More information about this can be found here: Pre-hashing Passwords with Bcrypt.

Resources: Documentation
Standards
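Spring's `BCryptPasswordEncoder` is not part of the JDK, but the slow, salted hashing idea can be sketched with the standard library's PBKDF2 support (the iteration count and helper names are illustrative choices, not prescribed by the rule):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class Pbkdf2Hasher {
    private static final int ITERATIONS = 210_000; // deliberately slow
    private static final int KEY_BITS = 256;
    private static final int SALT_LEN = 16;

    // A fresh random salt per password defeats pre-computed table attacks.
    public static byte[] newSalt() {
        byte[] salt = new byte[SALT_LEN];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Derives a slow, salted hash; a fast digest such as plain SHA-256
    // is unsuitable for password storage.
    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    // Constant-time comparison avoids timing side channels.
    public static boolean verify(char[] password, byte[] salt, byte[] expected) throws Exception {
        return MessageDigest.isEqual(hash(password, salt), expected);
    }
}
```

The salt and iteration count must be stored alongside the hash so the parameters can be raised later as hardware improves.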
java:S6432
When encrypting data using AES-GCM or AES-CCM, it is essential not to reuse the same initialization vector (IV, also called nonce) with a given key. To prevent this, it is recommended to either randomize the IV for each encryption or increment the IV after each encryption.

Why is this an issue? When encrypting data using a counter (CTR) derived block cipher mode of operation, it is essential not to reuse the same initialization vector (IV) for a given key. An IV that complies with this requirement is called a "nonce" (number used once). Galois/Counter Mode (GCM) and Counter with Cipher Block Chaining-Message Authentication Code (CCM) are both derived from counter mode.

When using AES-GCM or AES-CCM, a given key and IV pair will create a "keystream" that is used to encrypt a plaintext (original content) into a ciphertext (encrypted content). For any key and IV pair, this keystream is always deterministic. Because of this property, encrypting several plaintexts with one key and IV pair can be catastrophic. If an attacker has access to one plaintext and its associated ciphertext, they are able to decrypt everything that was created using the same pair. Additionally, IV reuse also drastically decreases the key recovery computational complexity by downgrading it to a simpler polynomial root-finding problem. This means that even without access to a plaintext/ciphertext pair, an attacker may still be able to decrypt all the sensitive data.

What is the potential impact? If the encryption that is being used is flawed, attackers might be able to exploit it in several ways. They might be able to decrypt existing sensitive data or bypass key protections. Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data: the encrypted message might contain data that is considered sensitive and should not be known to third parties. By not using the encryption algorithm correctly, the likelihood that an attacker might be able to recover the original sensitive data drastically increases.

Additional attack surface: encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. If an attacker is able to modify the cleartext of the encrypted message, it might be possible to trigger other vulnerabilities in the code.

How to fix it in Java Cryptography Extension

Code examples: the example uses a hardcoded IV as a nonce, which causes AES-GCM to be insecure. To fix it, a nonce is randomly generated instead.

Noncompliant code example:

```java
public void encrypt(byte[] key, byte[] ptxt) {
  byte[] nonce = "7cVgr5cbdCZV".getBytes("UTF-8");

  Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
  SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
  GCMParameterSpec gcmSpec = new GCMParameterSpec(128, nonce);

  cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec); // Noncompliant
}
```

Compliant solution:

```java
public void encrypt(byte[] key, byte[] ptxt) {
  SecureRandom random = new SecureRandom();
  byte[] nonce = new byte[12];
  random.nextBytes(nonce);

  Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
  SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
  GCMParameterSpec gcmSpec = new GCMParameterSpec(128, nonce);

  cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec);
}
```

How does this work? For AES-GCM and AES-CCM, NIST recommends generating a nonce using either a deterministic approach or a Random Bit Generator (RBG).

Generating nonces using random number generation: when using a randomized approach, NIST recommends a nonce of at least 96 bits generated by a cryptographically secure pseudorandom number generator (CSPRNG). Such a generator can create output with a sufficiently low probability of the same number being output twice (also called a collision) for a long time. However, after 2³² generated numbers with the same key, NIST recommends rotating this key for a new one. After that amount of generated numbers, the probability of a collision is high enough to be considered insecure. The code example above demonstrates how CSPRNGs can be used to generate nonces. Be careful to use a random number generator that is sufficiently secure. Default (non-cryptographically secure) RNGs might be more prone to collisions in their output, which is catastrophic for counter-based encryption modes.

Deterministically generating nonces: one method to prevent the same IV from being used multiple times with the same key is to update the IV in a deterministic way after each encryption. The most straightforward deterministic method for this is a counter. The way this works is simple: for any key, the first IV is the number zero. After this IV is used to encrypt something with a key, it is incremented for that key (and is now equal to 1). Although this requires additional bookkeeping, it guarantees that for each encryption key, an IV is never repeated.

For a secure implementation, NIST suggests generating these nonces in two parts: a fixed field and an invocation field. The fixed field should be used to identify the device executing the encryption (for example, it could contain a device ID), such that for one key, no two devices can generate the same nonce. The invocation field contains the counter as described above. For a 96-bit nonce, NIST recommends (but does not require) using a 32-bit fixed field and a 64-bit invocation field. Additional details can be found in NIST Special Publication 800-38D.

Resources: Standards
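The deterministic fixed-field/invocation-field construction described above can be sketched as follows (the class is hypothetical; the field sizes follow the 32-bit/64-bit split that NIST suggests for 96-bit nonces):

```java
import java.nio.ByteBuffer;

public class NonceCounter {
    private final int deviceId;   // 32-bit fixed field identifying this device
    private long invocation = 0;  // 64-bit invocation field, incremented per use

    public NonceCounter(int deviceId) {
        this.deviceId = deviceId;
    }

    // Produces a 96-bit (12-byte) nonce: the fixed field followed by the
    // counter, so the same (device, counter) pair is never emitted twice
    // for a given key.
    public byte[] next() {
        return ByteBuffer.allocate(12)
                .putInt(deviceId)
                .putLong(invocation++)
                .array();
    }
}
```

For this scheme to be safe, the counter state must survive restarts (e.g. be persisted), since resetting it to zero would reuse nonces.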
java:S6437
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. What is the potential impact?The consequences vary greatly depending on the situation and the secret-exposed audience. Still, two main scenarios should be considered. Financial lossFinancial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. Having the secret, each user of the application will be able to use it without limit to use the third party service to their own need, including in a way that was not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users. Application’s security downgradeA downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component. 
For example, an application that discloses a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements. How to fix itRevoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Analyze recent secret use When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. Use a secret vault A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. Code examplesThe following code example is noncompliant because it uses a hardcoded secret value. Noncompliant code exampleimport org.h2.security.SHA256; String inputString = "s3cr37"; byte[] key = inputString.getBytes(); SHA256.getHMAC(key, message); // Noncompliant Compliant solutionimport org.h2.security.SHA256; String inputString = System.getenv("SECRET"); byte[] key = inputString.getBytes(); SHA256.getHMAC(key, message); // Compliant How does this work?While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This makes the secret value environment-dependent and avoids storing the password in the source code itself. 
Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change. ResourcesDocumentation
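As a minimal, self-contained sketch of the environment-based approach described above (the variable name SECRET is an assumption for illustration; HMAC-SHA256 from the standard javax.crypto API stands in for the H2 helper used in the example):

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: the HMAC key is read from the environment at runtime instead of
// being hard-coded in the source. "SECRET" is an assumed variable name.
public class EnvHmac {

    // Computes an HMAC-SHA256 tag (always 32 bytes) over the message.
    public static byte[] hmacSha256(byte[] key, byte[] message) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(message);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Fails fast if the deployment forgot to provision the secret.
    public static byte[] keyFromEnv() {
        String secret = System.getenv("SECRET");
        if (secret == null) {
            throw new IllegalStateException("SECRET is not set in the environment");
        }
        return secret.getBytes(StandardCharsets.UTF_8);
    }
}
```

Failing fast when the variable is absent makes a missing secret a deployment error rather than a silent fallback to a weak default.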
Standards |
||||||||||||
java:S2077 |
Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into the query increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplepublic User getUser(Connection con, String user) throws SQLException { Statement stmt1 = null; Statement stmt2 = null; PreparedStatement pstmt; try { stmt1 = con.createStatement(); ResultSet rs1 = stmt1.executeQuery("GETDATE()"); // No issue; hardcoded query stmt2 = con.createStatement(); ResultSet rs2 = stmt2.executeQuery("select FNAME, LNAME, SSN " + "from USERS where UNAME=" + user); // Sensitive pstmt = con.prepareStatement("select FNAME, LNAME, SSN " + "from USERS where UNAME=" + user); // Sensitive ResultSet rs3 = pstmt.executeQuery(); //... } public User getUserHibernate(org.hibernate.Session session, String data) { org.hibernate.Query query = session.createQuery( "FROM students where fname = " + data); // Sensitive // ... } Compliant Solutionpublic User getUser(Connection con, String user) throws SQLException { Statement stmt1 = null; PreparedStatement pstmt = null; String query = "select FNAME, LNAME, SSN " + "from USERS where UNAME=?"; try { stmt1 = con.createStatement(); ResultSet rs1 = stmt1.executeQuery("GETDATE()"); pstmt = con.prepareStatement(query); pstmt.setString(1, user); // Good; PreparedStatements escape their inputs. ResultSet rs2 = pstmt.executeQuery(); //... } } public User getUserHibernate(org.hibernate.Session session, String data) { org.hibernate.Query query = session.createQuery("FROM students where fname = ?"); query = query.setParameter(0,data); // Good; Parameter binding escapes all input org.hibernate.Query query2 = session.createQuery("FROM students where fname = " + data); // Sensitive // ... See
|
||||||||||||
java:S4347 |
Cryptographic operations often rely on unpredictable random numbers to enhance security. These random numbers are created by cryptographically secure pseudo-random number generators (CSPRNG). It is important not to use a predictable seed with these random number generators otherwise the random numbers will also become predictable. Why is this an issue?Random number generators are often used to generate random values for cryptographic algorithms. When a random number generator is used for cryptographic purposes, the generated numbers must be as random and unpredictable as possible. When the random number generator is improperly seeded with a constant or a predictable value, its output will also be predictable. This can have severe security implications for cryptographic operations that rely on the randomness of the generated numbers. By using a predictable seed, an attacker can potentially guess or deduce the generated numbers, compromising the security of whatever cryptographic algorithm relies on the random number generator. What is the potential impact?It is crucial to understand that the strength of cryptographic algorithms heavily relies on the quality of the random numbers used. By improperly seeding a CSPRNG, we introduce a significant weakness that can be exploited by attackers. Insecure cryptographic keysOne of the primary use cases for CSPRNGs is generating cryptographic keys. If an attacker can predict the seed used to initialize the random number generator, they may be able to derive the same keys. Depending on the use case, this can lead to multiple severe outcomes, such as:
Session hijacking and man-in-the-middle attackAnother scenario where this vulnerability can be exploited is in the generation of session tokens or nonces for secure communication protocols. If an attacker can predict the seed used to generate these tokens, they can impersonate legitimate users or intercept sensitive information. How to fix it in Java SECode examplesThe following code uses a cryptographically strong random number generator to generate data that is not cryptographically strong. Noncompliant code exampleSecureRandom sr = new SecureRandom(); sr.setSeed(123456L); // Noncompliant int v = sr.nextInt(); SecureRandom sr = new SecureRandom("abcdefghijklmnop".getBytes("us-ascii")); // Noncompliant int v = sr.nextInt(); Compliant solutionSecureRandom sr = new SecureRandom(); int v = sr.nextInt(); This solution is available for JDK 1.8 and higher. SecureRandom sr = SecureRandom.getInstanceStrong(); int v = sr.nextInt(); How does this work?When the randomly generated data needs to be cryptographically strong, SecureRandom should be used without setting a predictable seed: left unseeded, it seeds itself from a secure source of entropy. To go the extra mile, SecureRandom.getInstanceStrong() can be used to get an instance of the strongest algorithm available on the platform. If the randomly generated data is not used for cryptographic purposes and is not business critical, it may be a better choice to use java.util.Random instead.
ResourcesDocumentation
Standards
|
||||||||||||
java:S5679 |
The Security Assertion Markup Language (SAML) is a widely used standard in single sign-on systems. In a simplified version, the user authenticates to an Identity Provider which generates a signed SAML Response. This response is then forwarded to a Service Provider for validation and authentication. Why is this an issue?If the Service Provider does not manage to properly validate the incoming SAML response message signatures, attackers might be able to manipulate the response content without the application noticing. Especially, they might be able to alter the authentication-targeted user. What is the potential impact?By exploiting this vulnerability, an attacker can manipulate the SAML Response to impersonate a different user. This, in turn, can have various consequences on the application’s security. Unauthorized AccessExploiting this vulnerability allows an attacker with authenticated access to impersonate other users within the SAML-based SSO system. This can lead to unauthorized access to sensitive information, resources, or functionalities the attacker should not have. By masquerading as legitimate users, the attacker can bypass authentication mechanisms and gain unauthorized privileges, potentially compromising the entire system. By impersonating a user with higher privileges, the attacker can gain access to additional resources. Privilege escalation can lead to further compromise of other systems and unauthorized access to critical infrastructure. Data BreachesWith the ability to impersonate other users, an attacker can gain access to sensitive data stored within the SAML-based SSO system. This includes personally identifiable information (PII), financial data, intellectual property, or any other confidential information. Data breaches can result in reputational damage, legal consequences, financial losses, and harm to individuals whose data is exposed. 
How to fix it in SpringCode examplesThe following code examples are vulnerable because they explicitly include comments in signature checks. An attacker is able to change the field identifying the authenticated user with XML comments. Noncompliant code exampleimport org.opensaml.xml.parse.StaticBasicParserPool; import org.opensaml.xml.parse.ParserPool; public ParserPool parserPool() { StaticBasicParserPool staticBasicParserPool = new StaticBasicParserPool(); staticBasicParserPool.setIgnoreComments(false); // Noncompliant return staticBasicParserPool; } import org.opensaml.xml.parse.BasicParserPool; import org.opensaml.xml.parse.ParserPool; public ParserPool parserPool() { BasicParserPool basicParserPool = new BasicParserPool(); basicParserPool.setIgnoreComments(false); // Noncompliant return basicParserPool; } Compliant solutionimport org.opensaml.xml.parse.StaticBasicParserPool; import org.opensaml.xml.parse.ParserPool; public ParserPool parserPool() { return new StaticBasicParserPool(); } import org.opensaml.xml.parse.BasicParserPool; import org.opensaml.xml.parse.ParserPool; public ParserPool parserPool() { return new BasicParserPool(); } ResourcesDocumentation
Articles & blog posts
Standards |
||||||||||||
java:S5322 |
Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities: Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest Android will start the application if it is not already running once a matching broadcast is received. The receiver is an entry point into the application. Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver. Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver. This rule raises an issue when a receiver is registered without specifying any broadcast permission. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesRestrict the access to broadcasted intents. See the Android documentation for more information. Sensitive Code Exampleimport android.content.BroadcastReceiver; import android.content.Context; import android.content.IntentFilter; import android.os.Build; import android.os.Handler; import android.support.annotation.RequiresApi; public class MyIntentReceiver { @RequiresApi(api = Build.VERSION_CODES.O) public void register(Context context, BroadcastReceiver receiver, IntentFilter filter, String broadcastPermission, Handler scheduler, int flags) { context.registerReceiver(receiver, filter); // Sensitive context.registerReceiver(receiver, filter, flags); // Sensitive // Broadcasting intent with "null" for broadcastPermission context.registerReceiver(receiver, filter, null, scheduler); // Sensitive context.registerReceiver(receiver, filter, null, scheduler, flags); // Sensitive } } Compliant Solutionimport android.content.BroadcastReceiver; import android.content.Context; import android.content.IntentFilter; import android.os.Build; import android.os.Handler; import android.support.annotation.RequiresApi; public class MyIntentReceiver { @RequiresApi(api = Build.VERSION_CODES.O) public void register(Context context, BroadcastReceiver receiver, IntentFilter filter, String broadcastPermission, Handler scheduler, int flags) { context.registerReceiver(receiver, filter, broadcastPermission, scheduler); context.registerReceiver(receiver, filter, broadcastPermission, scheduler, flags); } } See
|
||||||||||||
java:S5689 |
Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment. Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version. Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding PracticesIn general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle. The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header. Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that
this does not provide as much protection as regular updates and patches. Sensitive Code Example@GetMapping(value = "/example") public ResponseEntity<String> example() { HttpHeaders responseHeaders = new HttpHeaders(); responseHeaders.set("x-powered-by", "myproduct"); // Sensitive return new ResponseEntity<String>( "example", responseHeaders, HttpStatus.CREATED); } Compliant SolutionDo not disclose version information unless necessary. See
|
||||||||||||
java:S5324 |
Storing data locally is a common task for mobile applications. Such data includes files among other things. One convenient way to store files is to use the external file storage which usually offers a larger amount of disc space compared to internal storage. Files created on the external storage are globally readable and writable. Therefore, a malicious application granted the external storage permissions (READ_EXTERNAL_STORAGE or WRITE_EXTERNAL_STORAGE) can read and modify them.
External storage can also be removed by the user (e.g when based on SD card) making the files unavailable to the application. Ask Yourself WhetherYour application uses external storage to:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleimport android.content.Context; public class AccessExternalFiles { public void accessFiles(Context context) { context.getExternalFilesDir(null); // Sensitive } } Compliant Solutionimport android.content.Context; public class AccessExternalFiles { public void accessFiles(Context context) { context.getFilesDir(); } } See
|
||||||||||||
java:S5443 |
Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp.
In the past, it has led to the following vulnerabilities: This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp, or the retrieval of an environment variable such as TMP that points to one.
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Examplenew File("/tmp/myfile.txt"); // Sensitive Paths.get("/tmp/myfile.txt"); // Sensitive java.io.File.createTempFile("prefix", "suffix"); // Sensitive, will be in the default temporary-file directory. java.nio.file.Files.createTempDirectory("prefix"); // Sensitive, will be in the default temporary-file directory. Map<String, String> env = System.getenv(); env.get("TMP"); // Sensitive Compliant Solutionnew File("/myDirectory/myfile.txt"); // Compliant File.createTempFile("prefix", "suffix", new File("/mySecureDirectory")); // Compliant if(SystemUtils.IS_OS_UNIX) { FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(PosixFilePermissions.fromString("rwx------")); Files.createTempFile("prefix", "suffix", attr); // Compliant } else { File f = Files.createTempFile("prefix", "suffix").toFile(); // Compliant f.setReadable(true, true); f.setWritable(true, true); f.setExecutable(true, true); } See
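The detection this rule performs can be sketched as a simple prefix check against well-known world-writable directories; the directory list below is illustrative, not the analyzer's exhaustive list.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Sketch: flag hard-coded paths that live under a publicly writable directory.
public class PublicDirCheck {
    // Common world-writable locations on Unix-like systems (illustrative subset).
    private static final List<String> PUBLIC_DIRS =
        List.of("/tmp", "/var/tmp", "/usr/tmp", "/dev/shm");

    public static boolean isPubliclyWritable(String path) {
        Path p = Paths.get(path).normalize();
        // Path.startsWith compares whole name components, so "/tmpfoo" does not match "/tmp".
        return PUBLIC_DIRS.stream().anyMatch(p::startsWith);
    }
}
```

The component-wise comparison of `Path.startsWith` avoids false positives on sibling directories whose names merely share a prefix.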
|
||||||||||||
java:S5445 |
Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic. Why is this an issue?Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it. In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues. What is the potential impact?Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it. Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise. Information disclosureBecause attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive. For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements. 
Attack surface extensionAn application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitization. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise. For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over. How to fix itCode examplesThe following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function. Noncompliant code exampleimport java.io.File; import java.io.IOException; protected void Example() throws IOException { File tempDir; tempDir = File.createTempFile("", "."); tempDir.delete(); tempDir.mkdir(); // Noncompliant } Compliant solutionimport java.io.File; import java.io.IOException; import java.nio.file.Files; import java.nio.file.Path; protected void Example() throws IOException { Path tempPath = Files.createTempDirectory(""); File tempDir = tempPath.toFile(); } How does this work?Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks. Use a secure API functionTemporary file handling APIs generally provide secure functions to create temporary files. In most cases, they operate in an atomic way, creating and opening a file with a unique and unpredictable name in a single call. 
Those functions can often be used to replace less secure alternatives without requiring important development efforts. Here, the example compliant code uses the safer Files.createTempDirectory function. Strong security controlsTemporary files can be created using unsafe functions and API as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose. In general, applications should ensure that attackers can not create a file before them. This turns into the following requirements when creating the files:
Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them. ResourcesDocumentation
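The requirements above (atomic creation, owner-only permissions, cleanup after use) can be sketched with the java.nio.file API; the prefix and suffix strings are placeholders, and the POSIX permission attribute assumes a Unix-like host.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch: the file is created and named in a single atomic call, with
// permissions restricted to the owner from the moment it exists.
public class SecureTempFile {
    public static Path createOwnerOnly() {
        try {
            FileAttribute<Set<PosixFilePermission>> attr =
                PosixFilePermissions.asFileAttribute(
                    PosixFilePermissions.fromString("rw-------"));
            // Unique, unpredictable name chosen by the runtime; no separate
            // existence check, so no window for an attacker to pre-create it.
            return Files.createTempFile("app-", ".tmp", attr);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

After use, the caller should delete the file (e.g. `Files.deleteIfExists(path)`) rather than leave it behind in the shared directory.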
Standards |
||||||||||||
java:S6418 |
Because it is easy to extract strings from an application source code or binary, secrets should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Secrets should be stored outside of the source code in a configuration file or a management service for secrets. This rule detects variables/fields having a name matching a list of words (secret, token, credential, auth, api[_.-]?key) being assigned a pseudorandom hard-coded value. The pseudorandomness of the hard-coded value is based on its entropy and the probability of it being human-readable. The randomness sensitivity can be adjusted if needed. Lower values will detect less random values, raising potentially more false positives. Ask Yourself Whether
There would be a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code Exampleprivate static final String MY_SECRET = "47828a8dd77ee1eb9dde2d5e93cb221ce8c32b37"; public static void main(String[] args) { MyClass.callMyService(MY_SECRET); } Compliant SolutionUsing AWS Secrets Manager: import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest; import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse; public static void main(String[] args) { SecretsManagerClient secretsClient = ... MyClass.doSomething(secretsClient, "MY_SERVICE_SECRET"); } public static void doSomething(SecretsManagerClient secretsClient, String secretName) { GetSecretValueRequest valueRequest = GetSecretValueRequest.builder() .secretId(secretName) .build(); GetSecretValueResponse valueResponse = secretsClient.getSecretValue(valueRequest); String secret = valueResponse.secretString(); // do something with the secret MyClass.callMyService(secret); } Using Azure Key Vault Secret: import com.azure.identity.DefaultAzureCredentialBuilder; import com.azure.security.keyvault.secrets.SecretClient; import com.azure.security.keyvault.secrets.SecretClientBuilder; import com.azure.security.keyvault.secrets.models.KeyVaultSecret; public static void main(String[] args) throws InterruptedException, IllegalArgumentException { String keyVaultName = System.getenv("KEY_VAULT_NAME"); String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net"; SecretClient secretClient = new SecretClientBuilder() .vaultUrl(keyVaultUri) .credential(new DefaultAzureCredentialBuilder().build()) .buildClient(); MyClass.doSomething(secretClient, "MY_SERVICE_SECRET"); } public static void doSomething(SecretClient secretClient, String secretName) { KeyVaultSecret retrievedSecret = secretClient.getSecret(secretName); String secret = retrievedSecret.getValue(); // do something with the secret MyClass.callMyService(secret); } See
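The entropy heuristic this rule relies on can be sketched as Shannon entropy in bits per character: random-looking hard-coded values score high, while human-readable words score low. The 3.0 threshold below is an illustrative assumption, not the analyzer's actual cutoff.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an entropy-based "looks like a secret" heuristic.
public class EntropyCheck {
    // Shannon entropy of the character distribution, in bits per character.
    public static double shannonEntropy(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        double entropy = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / s.length();
            entropy -= p * Math.log(p) / Math.log(2);
        }
        return entropy;
    }

    // Illustrative threshold: hex tokens score well above it, plain words below.
    public static boolean looksRandom(String s) {
        return shannonEntropy(s) > 3.0;
    }
}
```

Raising the threshold flags fewer values (fewer false positives, more misses); lowering it does the opposite, mirroring the adjustable sensitivity described above.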
|
||||||||||||
java:S2053 |
This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes. Why is this an issue?During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords. However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital. What is the potential impact?Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need. Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster. If multiple users have the same password and the same salt, their password hashes would be identical. 
This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once. A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before. With short salts, the probability of a collision between two users' password and salt pairs might be low depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred. ExceptionsTo securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:
When they are used for password storage, using a secure, random salt is required. However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted. How to fix it in Java SECode examplesThe following code contains examples of hard-coded salts. Noncompliant code exampleimport javax.crypto.spec.PBEParameterSpec; public void hash() { byte[] salt = "salty".getBytes(); PBEParameterSpec cipherSpec = new PBEParameterSpec(salt, 10000); // Noncompliant } Compliant solutionimport java.security.SecureRandom; import javax.crypto.spec.PBEParameterSpec; public void hash() { SecureRandom random = new SecureRandom(); byte[] salt = new byte[16]; random.nextBytes(salt); PBEParameterSpec cipherSpec = new PBEParameterSpec(salt, 10000); } How does this work?This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards. Here, the compliant code example ensures the salt is random and has a sufficient length by calling the SecureRandom.nextBytes() method. ResourcesStandards |
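One of the computationally intensive key derivation functions mentioned above, PBKDF2, is available in Java SE and can be combined with a random salt as a minimal sketch (the iteration count and key length are illustrative values):

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch: PBKDF2-HMAC-SHA256 password hashing with a random 16-byte salt.
public class Pbkdf2Hash {
    public static byte[] randomSalt() {
        byte[] salt = new byte[16];       // 128 bits, unique per user
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Derives a 256-bit hash; salt must be stored alongside the result.
    public static byte[] hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
            SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return factory.generateSecret(spec).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the salt is random per user, two users with the same password end up with different hashes, defeating the pre-computed-table attacks described above.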
||||||||||||
java:S5320 |
In Android applications, broadcasting intents is security-sensitive. For example, it has led in the past to the following vulnerability: By default, broadcasted intents are visible to every application, exposing all sensitive information they contain. This rule raises an issue when an intent is broadcasted without specifying any "receiver permission". Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesRestrict the access to broadcasted intents. See Android documentation for more information. Sensitive Code Exampleimport android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.os.Build; import android.os.Bundle; import android.os.Handler; import android.os.UserHandle; import android.support.annotation.RequiresApi; public class MyIntentBroadcast { @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1) public void broadcast(Intent intent, Context context, UserHandle user, BroadcastReceiver resultReceiver, Handler scheduler, int initialCode, String initialData, Bundle initialExtras, String broadcastPermission) { context.sendBroadcast(intent); // Sensitive context.sendBroadcastAsUser(intent, user); // Sensitive // Broadcasting intent with "null" for receiverPermission context.sendBroadcast(intent, null); // Sensitive context.sendBroadcastAsUser(intent, user, null); // Sensitive context.sendOrderedBroadcast(intent, null); // Sensitive context.sendOrderedBroadcastAsUser(intent, user, null, resultReceiver, scheduler, initialCode, initialData, initialExtras); // Sensitive } } Compliant Solutionimport android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.os.Build; import android.os.Bundle; import android.os.Handler; import android.os.UserHandle; import android.support.annotation.RequiresApi; public class MyIntentBroadcast { @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1) public void broadcast(Intent intent, Context context, UserHandle user, BroadcastReceiver resultReceiver, Handler scheduler, int initialCode, String initialData, Bundle initialExtras, String broadcastPermission) { context.sendBroadcast(intent, broadcastPermission); context.sendBroadcastAsUser(intent, user, broadcastPermission); context.sendOrderedBroadcast(intent, broadcastPermission); 
context.sendOrderedBroadcastAsUser(intent, user,broadcastPermission, resultReceiver, scheduler, initialCode, initialData, initialExtras); } } See
|
||||||||||||
java:S4036 |
When executing an OS command, unless you specify the full path to the executable, the locations listed in your application’s PATH environment variable will be searched for the executable. Ask Yourself Whether
There is a risk if you answered yes to this question. Recommended Secure Coding PracticesFully qualified/absolute path should be used to specify the OS command to execute. Sensitive Code ExampleThe full path of the command is not specified and thus the executable will be searched in all directories listed in the PATH environment variable: Runtime.getRuntime().exec("make"); // Sensitive Runtime.getRuntime().exec(new String[]{"make"}); // Sensitive ProcessBuilder builder = new ProcessBuilder("make"); // Sensitive builder.command("make"); // Sensitive Compliant SolutionThe command is defined by its full path: Runtime.getRuntime().exec("/usr/bin/make"); // Compliant Runtime.getRuntime().exec(new String[]{"~/bin/make"}); // Compliant ProcessBuilder builder = new ProcessBuilder("./bin/make"); // Compliant builder.command("../bin/make"); // Compliant builder.command(Arrays.asList("..\\bin\\make", "-j8")); // Compliant builder = new ProcessBuilder(Arrays.asList(".\\make")); // Compliant builder.command(Arrays.asList("C:\\bin\\make", "-j8")); // Compliant builder.command(Arrays.asList("\\\\SERVER\\bin\\make")); // Compliant See |
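A simple guard matching the practice above can be sketched as follows: only accept commands whose location is given explicitly (absolute, or relative with an explicit directory), so the PATH lookup is never consulted. This is an illustrative helper, not part of any library.

```java
import java.nio.file.Paths;

// Sketch: reject bare command names that would trigger a PATH search.
public class CommandPathCheck {
    public static boolean isExplicitPath(String command) {
        // Absolute ("/usr/bin/make") or explicitly relative ("./bin/make")
        // paths name a concrete file; a bare "make" would be resolved via PATH.
        return Paths.get(command).isAbsolute() || command.contains("/");
    }
}
```

A caller could check `isExplicitPath(cmd)` before handing `cmd` to `ProcessBuilder` and refuse to run otherwise. (On Windows, backslash separators and drive letters would need the analogous treatment.)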
||||||||||||
java:S2092 |
When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleIf you create a security-sensitive cookie in your Java code: Cookie c = new Cookie(COOKIENAME, sensitivedata); c.setSecure(false); // Sensitive: a security-sensitive cookie is created with the secure flag set to false By default the secure flag is set to false: Cookie c = new Cookie(COOKIENAME, sensitivedata); // Sensitive: a security-sensitive cookie is created with the secure flag not defined (by default set to false) Compliant SolutionCookie c = new Cookie(COOKIENAME, sensitivedata); c.setSecure(true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag set to true See
|
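As a language-agnostic sketch of the same flag (using Python's standard http.cookies module, not one of the rule's own examples), the Secure and HttpOnly attributes can be set on the emitted Set-Cookie header:

```python
from http.cookies import SimpleCookie

def make_session_cookie(value: str) -> str:
    """Build a Set-Cookie header value with the protective flags set."""
    cookie = SimpleCookie()
    cookie["SESSIONID"] = value
    morsel = cookie["SESSIONID"]
    morsel["secure"] = True    # only sent over encrypted (HTTPS) requests
    morsel["httponly"] = True  # not readable from JavaScript
    return morsel.OutputString()

header = make_session_cookie("opaque-token")
# The rendered header carries both attributes.
assert "Secure" in header and "HttpOnly" in header
```

Whatever the language, the point is the same as in the Java example: the flag must be set explicitly, because the default leaves the cookie exposed to unencrypted transport.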
||||||||||||
java:S5122 |
Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led in the past to the following vulnerabilities: The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same-origin policy. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Sensitive Code ExampleJava servlet framework: @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { resp.setHeader("Content-Type", "text/plain; charset=utf-8"); resp.setHeader("Access-Control-Allow-Origin", "*"); // Sensitive resp.setHeader("Access-Control-Allow-Credentials", "true"); resp.setHeader("Access-Control-Allow-Methods", "GET"); resp.getWriter().write("response"); } Spring MVC framework: @CrossOrigin // Sensitive @RequestMapping("") public class TestController { public String home(ModelMap model) { model.addAttribute("message", "ok "); return "view"; } } CorsConfiguration config = new CorsConfiguration(); config.addAllowedOrigin("*"); // Sensitive config.applyPermitDefaultValues(); // Sensitive class Insecure implements WebMvcConfigurer { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**") .allowedOrigins("*"); // Sensitive } } User-controlled origin: public ResponseEntity<String> userControlledOrigin(@RequestHeader("Origin") String origin) { HttpHeaders responseHeaders = new HttpHeaders(); responseHeaders.add("Access-Control-Allow-Origin", origin); // Sensitive return new ResponseEntity<>("content", responseHeaders, HttpStatus.CREATED); } Compliant SolutionJava Servlet framework: @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { resp.setHeader("Content-Type", "text/plain; charset=utf-8"); resp.setHeader("Access-Control-Allow-Origin", "trustedwebsite.com"); // Compliant resp.setHeader("Access-Control-Allow-Credentials", "true"); resp.setHeader("Access-Control-Allow-Methods", "GET"); resp.getWriter().write("response"); } Spring MVC framework: @CrossOrigin("trustedwebsite.com") // Compliant @RequestMapping("") public class TestController { public String home(ModelMap model) { model.addAttribute("message", "ok "); return "view"; } } CorsConfiguration config = new CorsConfiguration(); 
config.addAllowedOrigin("http://domain2.com"); // Compliant class Safe implements WebMvcConfigurer { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**") .allowedOrigins("safe.com"); // Compliant } } User-controlled origin validated with an allow-list: public ResponseEntity<String> userControlledOrigin(@RequestHeader("Origin") String origin) { HttpHeaders responseHeaders = new HttpHeaders(); if (trustedOrigins.contains(origin)) { responseHeaders.add("Access-Control-Allow-Origin", origin); } return new ResponseEntity<>("content", responseHeaders, HttpStatus.CREATED); } See
|
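The allow-list validation shown in the last compliant example can be condensed into a small helper. This Python sketch (illustrative only; the origins are made up) echoes the Origin header back only when it is trusted, and sends no CORS header otherwise:

```python
# Hypothetical allow-list; in practice this would come from configuration.
TRUSTED_ORIGINS = {"https://trustedwebsite.com", "https://safe.com"}

def cors_headers(origin: str) -> dict:
    """Return response headers for a request carrying the given Origin.

    Echo the origin back only when it is on the allow-list; otherwise
    omit Access-Control-Allow-Origin entirely, which makes the browser
    block the cross-origin read."""
    if origin in TRUSTED_ORIGINS:
        # Vary: Origin keeps caches from serving one origin's CORS
        # response to another origin.
        return {"Access-Control-Allow-Origin": origin, "Vary": "Origin"}
    return {}

assert cors_headers("https://safe.com")["Access-Control-Allow-Origin"] == "https://safe.com"
assert cors_headers("https://evil.example") == {}
```

Note the contrast with the sensitive user-controlled-origin example above: reflecting an arbitrary Origin header is equivalent to `*` plus credentials, while the allow-list check reflects only pre-approved values.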
||||||||||||
java:S5247 |
To reduce the risk of cross-site scripting attacks, templating systems allow automatic escaping of variables before rendering templates. Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) will not be relevant when variables are used in an HTML attribute, because a javascript: URL contains no HTML characters to escape: <a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie) <a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack) Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesEnable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one. Sensitive Code ExampleWith JMustache by samskivert: Mustache.compiler().escapeHTML(false).compile(template).execute(context); // Sensitive Mustache.compiler().withEscaper(Escapers.NONE).compile(template).execute(context); // Sensitive With Freemarker: freemarker.template.Configuration configuration = new freemarker.template.Configuration(); configuration.setAutoEscapingPolicy(DISABLE_AUTO_ESCAPING_POLICY); // Sensitive Compliant SolutionWith JMustache by samskivert: Mustache.compiler().compile(template).execute(context); // Compliant, auto-escaping is enabled by default Mustache.compiler().escapeHTML(true).compile(template).execute(context); // Compliant With Freemarker. See "setAutoEscapingPolicy" documentation for more details. freemarker.template.Configuration configuration = new freemarker.template.Configuration(); configuration.setAutoEscapingPolicy(ENABLE_IF_DEFAULT_AUTO_ESCAPING_POLICY); // Compliant See
|
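The caveat about HTML escaping inside an href can be verified directly. Python's html.escape (used here purely as an illustration of the escaping strategy, not as one of the rule's engines) leaves a javascript: URL untouched, because the URL contains none of the characters HTML escaping rewrites:

```python
from html import escape

link = "javascript:alert(document.cookie)"

# HTML-escaping neutralises <, >, & and quote characters ...
assert escape("<script>") == "&lt;script&gt;"

# ... but it does NOT touch the characters of a javascript: URL, so
# HTML auto-escaping alone is not enough inside an href attribute.
assert escape(link) == link
```

This is exactly the situation in the `<a href="{{ myLink }}">` example above: the escaped output is byte-for-byte the injected payload, so attribute contexts need URL validation on top of auto-escaping.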
||||||||||||
docker:S4423 |
This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext. Why is this an issue?Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:
When selecting encryption algorithms, tools, or combinations, you should also consider two things:
For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community. To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:
When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means. What is the potential impact?After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect. Depending on the recovered data, the impact may vary. Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability. Additional attack surfaceBy modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can
further exploit a system to obtain more information. Breach of confidentiality and privacyWhen encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems. In this scenario, the company, its employees, users, and partners could be seriously affected. The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data. Legal and compliance issuesIn many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws. How to fix it in cURLCode examplesNoncompliant code exampleFROM ubuntu:22.04 # Noncompliant RUN curl --tlsv1.0 -O https://tlsv1-0.example.com/downloads/install.sh Compliant solutionFROM ubuntu:22.04 RUN curl --tlsv1.2 -O https://tlsv1-3.example.com/downloads/install.sh How does this work?As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community. The best choices at the moment are the following. Use TLS v1.2 or TLS v1.3Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community. The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support. The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older and insecure cipher suites that are deprecated as insecure. 
On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance. ResourcesArticles & blog posts
Standards |
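Mirroring curl's --tlsv1.2 flag in a client library usually means raising the minimum protocol version on the TLS context. A Python ssl sketch (an illustration added here, not part of the rule's examples):

```python
import ssl

# A client context that refuses anything older than TLS 1.2,
# mirroring curl's --tlsv1.2 flag in the compliant Dockerfile above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
# Certificate validation stays enabled by default.
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

Setting a floor rather than a single pinned version lets the handshake negotiate TLS 1.3 where both ends support it, while still rejecting the deprecated protocols listed above.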
||||||||||||
docker:S5332 |
Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. In the past, the use of clear-text protocols has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code ExampleRUN curl http://www.example.com/ Compliant SolutionRUN curl https://www.example.com/ See |
||||||||||||
docker:S6469 |
For mount types secret and ssh, the mode option controls the permissions of the mounted file. Why is this an issue?Docker offers a feature to mount files and directories for specific RUN instructions when building Docker images. This allows secrets, such as private keys or SSH agent sockets, to be made available to a build step without being stored in the resulting image. For these mount types, the default permission mode is restricted to the owning user. If the mode option is set to a permissive value, the mounted secret becomes readable by unintended users or processes during the build. What is the potential impact?Unauthorized accessThe unintended audience can exploit the leaked private key or equivalent to authenticate themselves as the legitimate owner, gaining unauthorized entry to systems, servers, or accounts that accept the key for authentication. This unauthorized access opens the door for various malicious activities, including data breaches, unauthorized modifications, and misuse of sensitive information. How to fix itCode examplesNoncompliant code example# Noncompliant RUN --mount=type=secret,id=build_secret,mode=0777 ./installer.sh Compliant solutionRUN --mount=type=secret,id=build_secret,mode=0700 ./installer.sh How does this work?In general, always follow the least privilege principle, and set the mode option to the most restrictive value that still lets the build work. In case you made this change because you need to access secrets or agents as a low-privileged user, you can use the uid and gid options ResourcesDocumentation
Standards |
||||||||||||
docker:S6502 |
Disabling builder sandboxes can lead to unauthorized access of the host system by malicious programs. By default, programs executed by a RUN instruction are run inside a sandbox. If you disable the sandbox with the --security=insecure flag, the executed command can reach the host. This vulnerability allows an attacker who controls the behavior of the executed command to access the host system, break out of the container and penetrate the infrastructure. After a successful intrusion, the underlying systems are exposed to:
Ask Yourself Whether
There is a risk if you answered yes to either of these questions. Recommended Secure Coding Practices
Sensitive Code Example# syntax=docker/dockerfile:1-labs FROM ubuntu:22.04 # Sensitive RUN --security=insecure ./example.sh Compliant Solution# syntax=docker/dockerfile:1-labs FROM ubuntu:22.04 RUN ./example.sh RUN --security=sandbox ./example.sh See |
||||||||||||
docker:S6504 |
Ownership or write permissions for a file or directory copied to the Docker image have been assigned to a user other than root. Write permissions enable malicious actors, who have a foothold on the container, to tamper with the resource and thus potentially manipulate the
container’s expected behavior. This also breaches the container immutability principle, as it facilitates container changes during its life. Immutability, a container best practice, allows for more reliable and reproducible behavior of Docker containers. If a user is given ownership of a file but no write permissions, the user can still modify it by using this ownership to change the file permissions first. This is why both ownership and write permissions should be avoided. Ask Yourself Whether
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices
Sensitive Code ExampleFROM example RUN useradd exampleuser # Sensitive COPY --chown=exampleuser:exampleuser src.py dst.py Compliant SolutionFROM example COPY --chown=root:root --chmod=755 src.py dst.py See
|
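The "ownership or write permissions" condition can be expressed as a small predicate. This Python sketch (a hypothetical helper, not part of the rule) flags a copied file as tamperable when a non-root user owns it or when the group/other write bits are set:

```python
import stat

def is_tamperable(mode: int, owner_is_root: bool) -> bool:
    """A file copied into an image is tamperable if a non-root user
    owns it (they could chmod it) or if group/other write bits are
    set (anyone in those categories could modify it directly)."""
    group_other_write = mode & (stat.S_IWGRP | stat.S_IWOTH)
    return (not owner_is_root) or bool(group_other_write)

assert not is_tamperable(0o755, owner_is_root=True)   # root, rwxr-xr-x: safe
assert is_tamperable(0o755, owner_is_root=False)      # owned by app user: chmod possible
assert is_tamperable(0o775, owner_is_root=True)       # group-writable: unsafe
```

This mirrors the reasoning above: ownership without write permission is still dangerous, because ownership is what grants the right to change the permissions.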
||||||||||||
docker:S6505 |
When installing dependencies, package managers like npm and yarn automatically execute scripts shipped with the installed packages. Ask Yourself Whether
There is a risk if you answered no to the question. Recommended Secure Coding PracticesExecution of third-party scripts should be disabled if not strictly necessary for dependencies to work correctly. Doing this will reduce the attack surface and block a well-known supply chain attack vector. Commands that are subject to this issue are: Sensitive Code ExampleFROM node:latest # Sensitive RUN npm install FROM node:latest # Sensitive RUN yarn install Compliant SolutionFROM node:latest RUN npm install --ignore-scripts FROM node:latest RUN yarn install --ignore-scripts See
|
||||||||||||
docker:S4507 |
Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDo not enable debugging features on production servers or applications distributed to end users. Sensitive Code ExampleFROM example # Sensitive ENV APP_DEBUG=true # Sensitive ENV ENV=development CMD /run.sh Compliant SolutionFROM example ENV APP_DEBUG=false ENV ENV=production CMD /run.sh See |
||||||||||||
docker:S4830 |
This vulnerability makes it possible that an encrypted communication is intercepted. Why is this an issue?Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be. When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted. What is the potential impact?Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats. Identity spoofingIf a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches. Loss of data integrityWhen TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system. How to fix itCode examplesThe following code contains examples of disabled certificate validation. 
Noncompliant code exampleFROM ubuntu:22.04 # Noncompliant RUN curl --insecure -O https://expired.example.com/downloads/install.sh Compliant solutionFROM ubuntu:22.04 RUN curl -O https://new.example.com/downloads/install.sh How does this work?Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation. To avoid running into problems with invalid certificates, consider the following sections. Using trusted certificatesIf possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration. Working with self-signed certificates or non-standard CAsIn some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store. ResourcesStandards
|
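Rather than disabling validation, a private CA can be added to the trust store, as the last paragraph recommends. A Python ssl sketch (an illustration; the cafile path would be your CA bundle, here left optional so the system trust store is used):

```python
import ssl
from typing import Optional

def context_for_private_ca(cafile: Optional[str] = None) -> ssl.SSLContext:
    """Trust an internal CA by loading its PEM bundle, instead of
    turning certificate validation off (the curl --insecure mistake).
    With cafile=None the system's default trust store is used."""
    return ssl.create_default_context(cafile=cafile)

ctx = context_for_private_ca()
# Hostname checking and certificate validation remain enabled.
assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
```

The key property is that nothing is weakened: the context still requires a valid certificate chain and a matching hostname; only the set of trusted roots changes.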
||||||||||||
docker:S6437 |
Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience. Why is this an issue?In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources. The trust issue can be more or less severe depending on the people’s role and entitlement. In Dockerfiles, hard-coded secrets and secrets passed through as variables or created at build-time will cause security risks. The secret information can be exposed either via the container environment, the image metadata, or the build environment logs. What is the potential impact?The consequences vary greatly depending on the situation and the secret-exposed audience. Still, two main scenarios should be considered. Financial lossFinancial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. Having the secret, each user of the application will be able to use it without limit to use the third party service to their own need, including in a way that was not expected. This additional use of the secret will lead to added costs with the service provider. Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users. 
Application’s security downgradeA downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component. For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements. How to fix itBest practices recommend using a secret vault for all secrets that must be accessed at container runtime. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available. For all secrets that must be accessed at image build time, it is recommended to rely on Docker Buildkit’s secret mount options. This will prevent secrets from being disclosed in image’s metadata and build logs. Additionally, investigations and remediation actions should be conducted to ensure the current and future security of the infrastructure. Revoke the secret Revoke any leaked secrets and remove them from the application source code. Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked. Analyze recent secret use When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent. This operation should be part of a global incident response process. 
Code examplesNoncompliant code exampleThe following code sample generates a new SSH private key that will be stored in the generated image. This key should be considered as compromised. Moreover, the SSH key encryption passphrase is also hardcoded. FROM example # Noncompliant RUN ssh-keygen -N "passphrase" -t rsa -b 2048 -f /etc/ssh/rsa_key RUN /example.sh --ssh /etc/ssh/rsa_key The following code sample uses a seemingly hidden password which is actually leaked in the image metadata after the build. FROM example ARG PASSWORD # Noncompliant RUN wget --user=guest --password="$PASSWORD" https://example.com Compliant solutionFROM example RUN --mount=type=secret,id=ssh,target=/etc/ssh/rsa_key \ /example.sh --ssh /etc/ssh/rsa_key FROM example RUN --mount=type=secret,id=wget,target=/home/user/.wgetrc \ wget --user=guest https://example.com For runtime secrets, best practices recommend relying on a vault service to pass secret information to the containers. Docker environment provides Swarm services that implement such a feature. If such an option can not be considered, store the runtime secrets in an environment file
such as .env, and provide it at container startup: docker run --env-file .env myImage It is then important to ensure that the environment files are securely stored and generated. ResourcesDocumentation
Standards |
||||||||||||
docker:S6500 |
Installing recommended packages automatically can lead to vulnerabilities in the Docker image. Potentially unnecessary packages are installed via a known Debian package manager. These packages will increase the attack surface of the created
container as they might contain unidentified vulnerabilities or malicious code. Those packages could be used as part of a broader supply chain attack.
In general, the more packages are installed in a container, the weaker its security posture is. To be secure, remove unused packages where possible and ensure images are subject to routine vulnerability scans. Ask Yourself Whether
There is a risk if you answered yes to the question. Recommended Secure Coding Practices
Sensitive Code ExampleFROM ubuntu:22.04 # Sensitive RUN apt install -y build-essential # Sensitive RUN apt-get install -y build-essential # Sensitive RUN aptitude install -y build-essential Compliant SolutionFROM ubuntu:22.04 RUN apt --no-install-recommends install -y build-essential RUN apt-get --no-install-recommends install -y build-essential RUN aptitude --without-recommends install -y build-essential See
|
||||||||||||
docker:S6506 |
The usage of HTTPS is not enforced here. As it is possible for the HTTP client to follow redirects, such redirects might lead to websites using HTTP. As HTTP is a clear-text protocol, it is considered insecure. Due to its lack of encryption, attackers that are able to sniff traffic from the network can read, modify, or corrupt the transported content. Therefore, allowing redirects to HTTP can lead to several risks:
Even in isolated networks, such as segmented cloud or offline environments, it is important to ensure the usage of HTTPS. If not, then insider threats with access to these environments might still be able to monitor or tamper with communications. Ask Yourself Whether
There is a risk if you answered yes to the question. Recommended Secure Coding Practices
Sensitive Code ExampleIn the examples below, an install script is downloaded using curl or wget and then piped into a shell. While connections made using HTTPS are generally considered secure, the client may still follow a redirect to an insecure HTTP location:
FROM ubuntu:22.04 # Sensitive RUN curl --tlsv1.2 -sSf -L https://might-redirect.example.com/install.sh | sh
FROM ubuntu:22.04 # Sensitive RUN wget --secure-protocol=TLSv1_2 -q -O - https://might-redirect.example.com/install.sh | sh Compliant SolutionIf you expect the server to redirect the download to a new location, restrict the allowed protocols to HTTPS so that a redirect to HTTP fails: FROM ubuntu:22.04 RUN curl --proto "=https" --tlsv1.2 -sSf -L https://might-redirect.example.com/install.sh | sh
If you expect the server to return the file without redirects, do not allow the client to follow them at all: FROM ubuntu:22.04 RUN curl --tlsv1.2 -sSf https://might-redirect.example.com/install.sh | sh
FROM ubuntu:22.04 RUN wget --secure-protocol=TLSv1_2 --max-redirect=0 -q -O - https://might-redirect.example.com/install.sh | sh See
|
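The effect of curl's --proto "=https" can be approximated in application code: before following a redirect, check that the new location is still HTTPS. A Python sketch (illustrative only; the URLs are hypothetical):

```python
from urllib.parse import urlparse

def safe_redirect_target(location: str) -> str:
    """Accept a redirect target only if it keeps using HTTPS,
    mirroring curl's --proto "=https" restriction."""
    if urlparse(location).scheme != "https":
        raise ValueError(f"refusing non-HTTPS redirect to {location!r}")
    return location

# An HTTPS target passes through unchanged.
assert safe_redirect_target("https://example.com/install.sh")

# A downgrade to clear-text HTTP is rejected.
try:
    safe_redirect_target("http://example.com/install.sh")
    raise AssertionError("should have refused the HTTP downgrade")
except ValueError:
    pass
```

The same check belongs wherever an HTTP client is configured to follow redirects automatically: the first request being HTTPS says nothing about the scheme of the final one.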
||||||||||||
docker:S2612 |
In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource. Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesThe most restrictive possible permissions should be assigned to files and directories. To be secure, remove the unnecessary permissions. If required, use Sensitive Code Example# Sensitive ADD --chmod=777 src dst # Sensitive COPY --chmod=777 src dst # Sensitive RUN chmod +x resource # Sensitive RUN chmod u+s resource Compliant SolutionADD --chmod=754 src dst COPY --chown=user:user --chmod=744 src dst RUN chmod u+x resource RUN chmod +t resource See
|
||||||||||||
docker:S4790 |
Cryptographic hash algorithms such as MD5 and SHA-1 are no longer considered secure, because it is too easy to create hash collisions with them. Ask Yourself WhetherThe hashed value is used in a security context like:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesSafer alternatives, such as SHA-256, SHA-512 or SHA-3, should be used instead. Sensitive Code ExampleFROM ubuntu:22.04 # Sensitive RUN echo "a40216e7c028e7d77f1aec22d2bbd5f9a357016f go1.20.linux-amd64.tar.gz" | sha1sum -c RUN tar -C /usr/local -xzf go1.20.linux-amd64.tar.gz ENV PATH="$PATH:/usr/local/go/bin" Compliant SolutionFROM ubuntu:22.04 RUN echo "5a9ebcc65c1cce56e0d2dc616aff4c4cedcfbda8cc6f0288cc08cda3b18dcbf1 go1.20.linux-amd64.tar.gz" | sha256sum -c RUN tar -C /usr/local -xzf go1.20.linux-amd64.tar.gz ENV PATH="$PATH:/usr/local/go/bin" See
|
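The switch from sha1sum to sha256sum corresponds, in code, to verifying a download against a SHA-256 digest. A Python sketch (illustrative; the expected digest below is the well-known SHA-256 of empty input):

```python
import hashlib
import hmac

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Check a downloaded artifact against a SHA-256 digest, the way
    `sha256sum -c` does, instead of the collision-prone SHA-1."""
    digest = hashlib.sha256(data).hexdigest()
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(digest, expected_sha256)

# SHA-256 of the empty input, a well-known constant.
empty_sha256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
assert verify_download(b"", empty_sha256)
assert not verify_download(b"tampered", empty_sha256)
```

The Dockerfile fix above is the same idea applied at build time: the pinned digest must come from a trusted source, and the hash function must be one for which collisions are infeasible.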
||||||||||||
docker:S6431 |
Using host operating system namespaces can lead to compromise of the host system. Host network sharing could provide a significant performance advantage for workloads that require critical network performance. However, the successful exploitation of this attack vector could have a catastrophic impact on confidentiality within the host. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesDo not use host operating system namespaces. Sensitive Code Example# syntax=docker/dockerfile:1.3 FROM example # Sensitive RUN --network=host wget -O /home/sessions http://127.0.0.1:9000/sessions Compliant Solution# syntax=docker/dockerfile:1.3 FROM example RUN --network=none wget -O /home/sessions http://127.0.0.1:9000/sessions See
|
||||||||||||
docker:S6472 |
Using ARG and ENV to handle secrets can lead to their exposure. The ARG instruction defines build-time variables, and the ENV instruction defines environment variables that persist in the resulting image and its containers. In most cases, build-time and environment variables are used to propagate configuration items from the host to the image or container. A typical example of an environment variable is PATH. Using these mechanisms to store secrets exposes them: build-time variables are recorded in the image metadata and build logs, and environment variables remain visible in the container environment. The concrete impact of such an issue highly depends on the secret’s purpose and the exposure sphere:
Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Note that, in both cases, the files exposing the secrets should be securely stored and not exposed to a large sphere. In most cases, using a secret vault or another similar component should be preferred. For example, Docker Swarm provides a secrets service that can be used to handle most confidential data. Sensitive Code ExampleFROM example # Sensitive ARG ACCESS_TOKEN # Sensitive ENV ACCESS_TOKEN=${ACCESS_TOKEN} CMD /run.sh Compliant SolutionFor build time secrets, use Buildkit’s secret mount type instead: FROM example RUN --mount=type=secret,id=build_secret ./installer.sh For runtime secrets, leave the environment variables empty until runtime: FROM example ENV ACCESS_TOKEN="" CMD /run.sh Store the runtime secrets in an environment file (such as .env) and pass it at startup: docker run --env-file .env myImage See
|
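Reading a secret from a mounted file instead of a baked-in variable keeps it out of the image metadata and history. A Python sketch of the consuming side (illustrative; in a Swarm deployment the path would typically be /run/secrets/&lt;id&gt;, and the helper name here is made up):

```python
import os
import tempfile

def read_secret(path: str) -> str:
    """Read a secret from a mounted file (e.g. a BuildKit secret mount
    or a Swarm secret under /run/secrets/) instead of from ENV/ARG,
    which leak via `docker history` and image metadata."""
    with open(path, encoding="utf-8") as fh:
        return fh.read().strip()

# Simulate a secret mount with a temporary file.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".secret") as fh:
    fh.write("s3cr3t-token\n")
assert read_secret(fh.name) == "s3cr3t-token"
os.unlink(fh.name)
```

The application-side change is small; the payoff is that the secret exists only on a mounted filesystem at run or build time, never in a layer or in the container's ambient environment.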
||||||||||||
docker:S6473 |
Exposing administration services can lead to unauthorized access to containers or escalation of privilege inside of containers. A port that is commonly used for administration services is marked as being open through the EXPOSE instruction. Removing this EXPOSE entry reduces the attack surface of the container. Ask Yourself Whether
There is a risk if you answered yes to the question. Recommended Secure Coding Practices
Sensitive Code ExampleFROM ubuntu:22.04 # Sensitive EXPOSE 22 CMD ["/usr/sbin/sshd", "-f", "/etc/ssh/sshd_config", "-D"] See
|
||||||||||||
docker:S6497 |
This rule is deprecated; use S6596 instead. A container image digest uniquely and immutably identifies a container image. A tag, on the other hand, is a mutable reference to a container image. This tag can be updated to point to another version of the container at any point in time. The problem is that pulling such an image prevents the resulting container from being updated or patched in order to remove vulnerabilities or significant bugs. Ask Yourself Whether
There is a risk if you answer yes to this question. Recommended Secure Coding PracticesContainers should get the latest security updates. If there is a need for determinism, the solution is to find tags that are not as prone to change
as the latest tag. To do so, favor a more precise tag that uses semantic versioning and target a major version, for example. Sensitive Code ExampleFROM mongo@sha256:8eb8f46e22f5ccf1feb7f0831d02032b187781b178cb971cd1222556a6cee9d1 RUN echo ls Compliant SolutionHere, mongo:6.0 is better than using a digest, and better than using a more precise version, such as 6.0.4, because it would prevent 6.0.5 security updates: FROM mongo:6.0 RUN echo ls See |
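The "precise but still updatable tag" guidance can be checked mechanically. This Python sketch (a heuristic written for illustration, not an official rule implementation) accepts major or major.minor tags like mongo:6.0 and rejects digests, latest, and fully pinned patch versions:

```python
import re

def is_updatable_pin(ref: str) -> bool:
    """True for image references like mongo:6.0 that still receive
    patch-level security updates; False for immutable digests, the
    drifting `latest` tag, and fully pinned patch versions."""
    if "@sha256:" in ref or ref.endswith(":latest"):
        return False
    # Accept :MAJOR or :MAJOR.MINOR at the end of the reference.
    return re.search(r":\d+(\.\d+)?$", ref) is not None

assert is_updatable_pin("mongo:6.0")
assert not is_updatable_pin("mongo:latest")
assert not is_updatable_pin(
    "mongo@sha256:8eb8f46e22f5ccf1feb7f0831d02032b187781b178cb971cd1222556a6cee9d1"
)
```

This encodes the trade-off stated above: a digest gives determinism but freezes vulnerabilities in place, while a major.minor tag keeps the base image on a known release line that still picks up fixes.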
||||||||||||
docker:S6470 |
When building a Docker image from a Dockerfile, a context directory is used and sent to the Docker daemon before the actual build starts. This context directory usually contains the Dockerfile itself, along with all the files that will be necessary for the build to succeed. This generally includes:
The COPY and ADD instructions copy files and directories from this context directory into the image. When the whole context directory, or paths expanded from wildcards, are copied, files that are not needed in the image (including sensitive ones) may end up inside it. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Keep in mind that the content of the context directory might change depending on the build environment and over time. Recommended Secure Coding Practices
Sensitive Code Example
Copying the complete context directory:

FROM ubuntu:22.04
# Sensitive
COPY . .
CMD /run.sh

Copying multiple files and directories whose names are expanded at build time:

FROM ubuntu:22.04
# Sensitive
COPY ./example* /
COPY ./run.sh /
CMD /run.sh

Compliant Solution

FROM ubuntu:22.04
COPY ./example1 /example1
COPY ./example2 /example2
COPY ./run.sh /
CMD /run.sh

See
|
||||||||||||
docker:S6471 |
Running containers as a privileged user weakens their runtime security, allowing any user whose code runs on the container to perform
administrative actions. A malicious user can run code on a system either through actions that could be deemed legitimate, depending on internal business logic or operational management shells, or through outright malicious actions, for example arbitrary code execution after exploiting a service that the container hosts. Suppose the container is not hardened to prevent using a shell, interpreter, or Linux capabilities. In this case, the malicious user can read and exfiltrate any file (including Docker volumes), open new network connections, install malicious software, or, worse, break out of the container’s isolation context by exploiting other components. This gives attackers the possibility to steal important infrastructure files, intellectual property, or personal data. Depending on the infrastructure’s resilience, attackers may then extend their attack to other services, such as Kubernetes clusters or cloud providers, in order to maximize their reach. Ask Yourself Whether
This container:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding PracticesIn the Dockerfile:
Or, at launch time:
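The launch-time option mentioned above can be sketched as follows (the image name and the numeric UID/GID are illustrative):

```shell
# Start the container as an unprivileged user without rebuilding the image;
# the numeric IDs need not exist in the image's /etc/passwd
docker run --user 1000:1000 example-image
```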
If this image is already explicitly set to launch with a non-privileged user, you can add it to the safe images list rule property of your SonarQube instance, without the tag. Sensitive Code Example
For any image that does not provide a user by default, regardless of their underlying operating system:

# Sensitive
FROM alpine
ENTRYPOINT ["id"]

For multi-stage builds, the last stage is non-compliant if it does not contain a USER instruction:

FROM alpine AS builder
COPY Makefile ./src /
RUN make build
USER nonroot

# Sensitive, previous user settings are dropped
FROM alpine AS runtime
COPY --from=builder bin/production /app
ENTRYPOINT ["/app/production"]

Compliant Solution
For Linux-based images and scratch-based images that untar a Linux distribution:

FROM alpine
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot
USER nonroot
ENTRYPOINT ["id"]

For Windows-based images, you can use the net user command:

FROM mcr.microsoft.com/windows/servercore:ltsc2019
RUN net user /add nonroot
USER nonroot

For multi-stage builds, the non-root user should be on the last stage:

FROM alpine as builder
COPY Makefile ./src /
RUN make build

FROM alpine as runtime
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot
COPY --from=builder bin/production /app
USER nonroot
ENTRYPOINT ["/app/production"]

For images built from scratch, no user-management tools are available in the final stage. Here is an example that uses a separate stage to provide the user database:

FROM alpine:latest as security_provider
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot

FROM scratch as production
COPY --from=security_provider /etc/passwd /etc/passwd
USER nonroot
COPY production_binary /app
ENTRYPOINT ["/app/production_binary"]

See |
||||||||||||
scala:S1313 |
Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities: Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:
Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact. Ask Yourself Whether
The disclosed IP address is sensitive, e.g.:
There is a risk if you answered yes to any of these questions. Recommended Secure Coding Practices
Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software. Sensitive Code Example

val ip = "192.168.12.42" // Sensitive
val socket = new Socket(ip, 6667)

Compliant Solution

val ips = Source.fromFile(configuration_file).getLines.toList // Compliant
val socket = new Socket(ips(0), 6667)

Exceptions
No issue is reported for the following cases because they are not considered sensitive:
See
|
||||||||||||
scala:S2068 |
Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source. In the past, it has led to the following vulnerabilities: Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets. This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list. It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", … Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
See
|
||||||||||||
kubernetes:S6428 |
Running containers in privileged mode can reduce the resilience of a cluster in the event of a security incident because it weakens the isolation between hosts and containers. Process permissions in privileged containers are essentially the same as root permissions on the host. If these processes are not protected by
robust security measures, an attacker who compromises a root process on a Pod’s host is likely to gain the ability to pivot within the cluster. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Disable privileged mode. Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        privileged: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        privileged: false

See |
||||||||||||
kubernetes:S6865 |
Why is this an issue?
Service account tokens are Kubernetes secrets created automatically to authenticate applications running inside pods to the API server. If a pod is compromised, an attacker could use this token to gain access to other resources in the cluster. For example, they could create new pods, modify existing ones, or even delete critical system pods, depending on the permissions associated with the service account. Therefore, it’s recommended to disable the automounting of service account tokens when it’s not necessary for the application running in the pod.
What is the potential impact?
Unauthorized Access
If a pod with a mounted service account gets compromised, an attacker could potentially use the token to interact with the Kubernetes API, possibly leading to unauthorized access to other resources in the cluster.
Privilege Escalation
Service account tokens are often bound with roles that have extensive permissions. If these tokens are exposed, it could lead to privilege escalation where an attacker gains higher-level permissions than intended.
Data Breach
Service account tokens can be used to access sensitive data stored in the Kubernetes cluster. If these tokens are compromised, it could lead to a data breach.
Denial of Service
An attacker with access to a service account token could potentially overload the Kubernetes API server by sending a large number of requests, leading to a Denial of Service (DoS) attack.
How to fix it
Code examples
Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec: # Noncompliant
  containers:
    - name: example-pod
      image: nginx:1.25.3

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: example-pod
      image: nginx:1.25.3
  automountServiceAccountToken: false

How does this work?
The automounting of service account tokens can be disabled by setting automountServiceAccountToken: false in the Pod specification.
Resources
Documentation
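The same field also exists on the ServiceAccount object itself, which disables automounting for every Pod that uses the account unless a Pod explicitly opts back in; a minimal sketch (the account name is illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
# Pods using this account will not get a token mounted by default;
# a Pod-level automountServiceAccountToken setting takes precedence.
automountServiceAccountToken: false
```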
Standards |
||||||||||||
kubernetes:S6867 |
Why is this an issue?
Using wildcards when defining Role-Based Access Control (RBAC) permissions in Kubernetes can lead to significant security issues. This is because it grants overly broad permissions, potentially allowing access to sensitive resources. RBAC is designed to limit the access rights of users within the system by assigning roles to them. These roles define what actions a user can perform and on which resources. When a wildcard is used, it means that the role has access to all resources/verbs, bypassing the principle of least privilege. This principle states that users should have only the minimal permissions they need to perform their job function.
What is the potential impact?
If an attacker gains access to a role with wildcard permissions, they could potentially read, modify, or delete any resource in the Kubernetes cluster, leading to data breaches, service disruptions, or other malicious activities.
How to fix it
Code examples
Noncompliant code example

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["*"] # Noncompliant
    verbs: ["get", "list"]

Compliant solution

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

How does this work?
When defining RBAC permissions, it is important to follow the principle of least privilege. By explicitly specifying the verbs and resources a user should have access to instead of using wildcards, it can be ensured that users have only the permissions they need to perform their job function.
Resources
Documentation
Standards |
||||||||||||
kubernetes:S5332 |
Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection.
Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen. For example, attackers could successfully compromise prior security layers by:
In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle. Note that using the http protocol is being deprecated by major web browsers. In the past, it has led to the following vulnerabilities: Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system. Sensitive Code Example

apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
        - name: curl
          image: curlimages/curl
          command: ["curl"]
          args: ["http://example.com/"] # Sensitive

Compliant Solution

apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
        - name: curl
          image: curlimages/curl
          command: ["curl"]
          args: ["https://example.com/"]

See
|
||||||||||||
kubernetes:S6429 |
Exposing Docker sockets can lead to compromise of the host systems. The Docker daemon provides an API to access its functionality, for example through a UNIX domain socket. Mounting the Docker socket into a container allows the container to control the Docker daemon of the host system, resulting in full access over the whole system. A compromised or rogue container with access to the Docker socket could endanger the integrity of the whole Kubernetes cluster. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to never add a Docker socket as a volume to a Pod. Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /var/run/docker.sock
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        path: /var/run/docker.sock # Sensitive
        type: Socket

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container

See
|
||||||||||||
kubernetes:S6433 |
Mounting sensitive file system paths can lead to information disclosure and compromise of the host systems. System paths can contain sensitive information like configuration files or cache files. Those might be used by attackers to expand permissions or to collect information for further attacks. System paths can also contain binaries and scripts that might be executed by the host system periodically. A compromised or rogue container with access to sensitive files could endanger the integrity of the whole Kubernetes cluster. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
It is recommended to avoid mounting sensitive system file paths into containers. If it is necessary to mount such a path due to the architecture, the least privileges should be given, for instance by making the mount read-only to prevent unwanted modifications. Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /data
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        path: /etc # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /data
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        path: /mnt/nfs

See |
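When mounting a host path is unavoidable, the least-privilege advice above can be applied by making the mount read-only; a minimal sketch based on the compliant example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /data
          name: test-volume
          readOnly: true   # the container cannot modify the host files
  volumes:
    - name: test-volume
      hostPath:
        path: /mnt/nfs
```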
||||||||||||
kubernetes:S6864 |
Why is this an issue?
A memory limit is a configuration that sets the maximum amount of memory that a container can use. It is part of the resource management functionality of Kubernetes, which allows for the control and allocation of computational resources to containers. When a memory limit is set for a container, Kubernetes ensures that the container does not exceed the specified limit. If a container tries to use more memory than its limit, the system will reclaim the excess memory, which could lead to termination of processes within the container. Without a memory limit, a container can potentially consume all available memory on a node, which can lead to unpredictable behavior of the container or the node itself. Therefore, defining a memory limit for each container is a best practice in Kubernetes configurations. It helps in managing resources effectively and ensures that a single container does not monopolize the memory resources of a node.
What is the potential impact?
Denial of Service
Without a memory limit, a container can consume all available memory on a node. This could lead to a Denial of Service (DoS) condition where other containers on the same node are starved of memory. These containers may slow down, become unresponsive, or even crash, affecting the overall functionality and availability of applications running on them.
Inefficient Resource Allocation
When containers lack specified resource requests, the Kubernetes scheduler may not make optimal decisions about pod placement and resource contention management. This could result in the scheduler placing a resource-intensive pod on a node with insufficient resources, leading to performance issues or even node failure.
How to fix it
Code examples
To avoid potential issues, specify a memory limit for each container.
Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web # Noncompliant
      image: nginx

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        limits:
          memory: 100Mi

How does this work?
A limit can be set through the resources.limits.memory property of a container.
Resources
Documentation
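Because the scheduler uses resource requests rather than limits for pod placement, a common pattern is to set both; a minimal sketch (the values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          memory: 64Mi    # used by the scheduler for pod placement
        limits:
          memory: 100Mi   # hard cap enforced at runtime
```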
Standards |
||||||||||||
kubernetes:S6868 |
Why is this an issue?
Allowing command execution (exec) for roles in a Kubernetes cluster can pose a significant security risk. This is because it provides the user with the ability to execute arbitrary commands within a container, potentially leading to unauthorized access or data breaches. In a production Kubernetes cluster, exec permissions are typically unnecessary due to the principle of least privilege, which suggests that a user or process should only have the minimum permissions necessary to perform its function. Additionally, containers in production are often treated as immutable infrastructure, meaning they should not be changed once deployed. Any changes should be made to the container image, which is then used to deploy a new container.
What is the potential impact?
Exploiting Vulnerabilities Within the Container
If a user or service has the ability to execute commands within a container, they could potentially identify and exploit vulnerabilities within the container’s software. This could include exploiting known vulnerabilities in outdated software versions, or finding and exploiting new vulnerabilities. This could lead to unauthorized access to the container, allowing the attacker to manipulate its operations or access its data.
Installing Malicious Software
Command execution permissions could also be used to install malicious software within a container. This could include malware, spyware, ransomware, or other types of harmful software. Once installed, this software could cause a wide range of issues, from data corruption or loss, to providing a backdoor for further attacks. It could also be used to create a botnet, using the compromised container to launch attacks on other systems.
Extracting Sensitive Data
If an attacker has the ability to execute commands within a container, they could potentially access and extract sensitive data. This could include user data, confidential business information, or other types of sensitive data.
The extracted data could then be used for a wide range of malicious purposes, from identity theft to corporate espionage. This could lead to significant financial loss, damage to reputation, and potential legal consequences. How to fix it
Code examples
Noncompliant code example

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods/exec"] # Noncompliant
    verbs: ["create"]

Compliant solution

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]

How does this work?
The pods/exec subresource grants the ability to run commands inside containers; removing it from the role's rules denies exec access while still allowing pods to be read.
Resources
Documentation
Standards |
||||||||||||
kubernetes:S6869 |
Why is this an issue?
A CPU limitation for a container is a specified boundary or restriction that determines the maximum amount of CPU resources that a container can utilize. It is a part of resource management in a containerized environment, and it is set to ensure that a single container does not monopolize the CPU resources of the host machine. CPU limitations are important for maintaining a balanced and efficient system. They help in distributing resources fairly among different containers, ensuring that no single container can cause a system-wide slowdown by consuming more than its fair share of CPU resources.
What is the potential impact?
Performance degradation
Without CPU limitations, a single container could monopolize all available CPU resources, leading to a system-wide slowdown. Other containers or processes on the same host might be deprived of the necessary CPU resources, causing them to function inefficiently.
System instability
In extreme cases, a container with no CPU limit could cause the host machine to become unresponsive. This can lead to system downtime and potential loss of data, disrupting critical operations and impacting system reliability.
How to fix it
Code examples
Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web # Noncompliant
      image: nginx

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        limits:
          cpu: 0.5

How does this work?
A limit can be set through the resources.limits.cpu property of a container.
Resources
Documentation
Standards |
||||||||||||
kubernetes:S5849 |
Setting capabilities can lead to privilege escalation and container escapes. Linux capabilities allow you to assign narrow slices of root's permissions to processes. In a container, capabilities might allow access to resources from the host system, which can result in container escapes. For example, with the SYS_ADMIN capability, a process is allowed to perform a range of administrative operations, such as mounting filesystems, which an attacker can abuse to escape the container. Ask Yourself Whether
Capabilities are granted:
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Capabilities are high privileges, traditionally associated with superuser (root), thus make sure that the most restrictive and necessary capabilities are assigned. Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      securityContext:
        capabilities:
          add: ["SYS_ADMIN"] # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container

See
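A stricter variant of the compliant solution is to drop all capabilities explicitly and add back only those the workload demonstrably needs; a minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      securityContext:
        capabilities:
          drop: ["ALL"]   # start from zero capabilities
```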
|
||||||||||||
kubernetes:S6431 |
Using host operating system namespaces can lead to compromise of the host systems. A Pod can be configured to share the host's process ID namespace, IPC namespace, and network namespace.
These three items likely include systems that support either the internal operation of the Kubernetes cluster or the enterprise’s internal infrastructure. Opening these points to containers opens new attack surfaces for attackers who have already successfully exploited services exposed by containers. Depending on how resilient the cluster is, attackers can extend their attack to the cluster by compromising the nodes from which the cluster started the process. Host network sharing could provide a significant performance advantage for workloads that require critical network performance. However, the successful exploitation of this attack vector could have a catastrophic impact on confidentiality within the cluster. Ask Yourself Whether
There is a risk if you answered yes to any of those questions. Recommended Secure Coding Practices
Do not use host operating system namespaces. Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  hostPID: true # Sensitive
  hostIPC: true # Sensitive
  hostNetwork: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  hostPID: false
  hostIPC: false
  hostNetwork: false

See |
||||||||||||
kubernetes:S6473 |
Exposing administrative services can lead to unauthorized access to pods or escalation of privileges inside pods. A port that is commonly used for administration services is open or marked as being open. Administration services like SSH might contain vulnerabilities, hard-coded credentials, or other security issues that increase the attack surface of a Kubernetes deployment. Even if the ports of the services do not get forwarded to the host system, by default they are reachable from other containers in the same network. A malicious actor that gets access to one container could use such services to escalate access and privileges. If the administrative port is forwarded through a load balancer, then in most cases this port should be removed from the configuration to make sure
it is not reachable externally. Setting the containerPort property is merely informative and does not itself open or close any port. In both cases, it is most secure to not start any administrative services in deployments. Instead, try to access the required information using
Kubernetes’s own administrative tools. For example, to execute code inside a container, kubectl exec can be used. Ask Yourself Whether
There is a risk if you answered yes to the question. Recommended Secure Coding Practices
Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: example_app
spec:
  containers:
    - name: applications
      image: my_image_with_ssh
      ports:
        - containerPort: 22 # NonCompliant: Merely informative, removing this property does not close port 22.
---
apiVersion: v1
kind: Service
metadata:
  name: example_lb
spec:
  type: LoadBalancer
  ports:
    - port: 8022
      targetPort: 22 # Compliant
  selector:
    app: example_app

See
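As an alternative to an in-container SSH daemon, Kubernetes's own tooling can execute commands in a running Pod; a hypothetical invocation (the Pod name and commands are illustrative):

```shell
# Run a one-off command inside a running pod instead of connecting over SSH
kubectl exec example-pod -- cat /var/log/app.log

# Or open an interactive shell for debugging
kubectl exec -it example-pod -- /bin/sh
```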
|
||||||||||||
kubernetes:S6870 |
Why is this an issue?
Ephemeral storage is a type of storage that is temporary and non-persistent, meaning it does not retain data once the process is terminated. In the context of Kubernetes, ephemeral storage is used for storing temporary files that a running container can write and read. The issue at hand pertains to the creation of a container without any defined limits for this ephemeral storage. This means that the container can potentially consume as much ephemeral storage as is available on the node where it is running.
What is the potential impact?
Resource exhaustion
Without a defined limit, a container can consume all available ephemeral storage on a node. This can lead to resource exhaustion, where no more storage is available for other containers or processes running on the same node. This could cause these other containers or processes to fail or perform poorly.
Unpredictable application behavior
If a container exhausts the available ephemeral storage, it can lead to unpredictable application behavior. For instance, if an application attempts to write to the ephemeral storage and there is no space left, it may crash or exhibit other unexpected behaviors.
How to fix it
Code examples
Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web # Noncompliant
      image: nginx
      volumeMounts:
        - name: ephemeral
          mountPath: "/tmp"

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        limits:
          ephemeral-storage: "2Gi"
      volumeMounts:
        - name: ephemeral
          mountPath: "/tmp"

How does this work?
A limit can be set through the resources.limits.ephemeral-storage property of a container.
Resources
Documentation
Standards |
||||||||||||
kubernetes:S6430 |
Allowing process privilege escalations exposes the Pod to attacks that exploit setuid binaries. The allowPrivilegeEscalation field directly controls whether the no_new_privs flag gets set on the container process. Depending on how resilient the Kubernetes cluster and Pods are, attackers can extend their attack to the cluster by compromising the nodes from which the cluster started the Pod. Ask Yourself Whether
There is a risk if you answered yes to all of these questions. Recommended Secure Coding Practices
Disable privilege escalation. Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: false

See
|