SonarQube Vulnerability Report

Report Generated On: Friday, May 10, 2024
Project Name/URL: Application
Release:
Branch: main
Delta Analysis: No

Summary of the Detected Vulnerabilities

Severity  Number of Issues
HIGH      166
MEDIUM    28
LOW       64

Detail of the Detected Vulnerabilities

Rule Severity Component Line Description Message Status
secrets:S6706 HIGH lib/insecurity.ts 23 Cryptographic private keys should not be disclosed Make sure this private key gets revoked, changed, and removed from the code. OPEN
typescript:S2068 HIGH frontend/src/app/Services/two-factor-auth-service.spec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/Services/two-factor-auth-service.spec.ts 80 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/oauth/oauth.component.spec.ts 85 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/oauth/oauth.component.spec.ts 85 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/oauth/oauth.component.spec.ts 92 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 117 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 135 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 136 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 153 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 153 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/2faSpec.ts 169 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/2faSpec.ts 195 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/addressApiSpec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/basketApiSpec.ts 25 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/basketApiSpec.ts 101 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/basketItemApiSpec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 56 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 77 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 108 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 140 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 174 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 205 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 250 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 287 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 295 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 22 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 49 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 78 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 113 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 153 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 195 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 235 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 283 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 333 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 23 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 52 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 83 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 111 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 35 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 53 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 71 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 89 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 105 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 129 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 149 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 170 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 191 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 18 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 37 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 80 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 99 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 119 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 140 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/feedbackApiSpec.ts 119 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/feedbackApiSpec.ts 152 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 30 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 46 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 79 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 94 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 109 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 124 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 142 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 245 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 266 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/memoryApiSpec.ts 26 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/memoryApiSpec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/memoryApiSpec.ts 91 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 19 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 56 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 73 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 90 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 109 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 129 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 149 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 29 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 47 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 93 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/paymentApiSpec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/productReviewApiSpec.ts 111 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/productReviewApiSpec.ts 131 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 25 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 52 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 97 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 123 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 38 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 55 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 72 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 93 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 114 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 137 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 155 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 173 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 190 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 207 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 228 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 249 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 269 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 292 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 309 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 326 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/securityAnswerApiSpec.ts 44 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 42 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 60 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 82 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 100 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 106 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 118 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 136 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 158 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 180 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 199 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 260 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 271 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userProfileSpec.ts 19 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/walletApiSpec.ts 18 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/administration.spec.ts 5 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/b2bOrder.spec.ts 6 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/b2bOrder.spec.ts 37 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/basket.spec.ts 4 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/basket.spec.ts 76 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/changePassword.spec.ts 6 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/changePassword.spec.ts 25 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/changePassword.spec.ts 31 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/chatbot.spec.ts 3 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/complain.spec.ts 5 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/contact.spec.ts 11 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/contact.spec.ts 47 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/dataErasure.spec.ts 3 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/dataExport.spec.ts 24 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/deluxe.spec.ts 4 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/deluxe.spec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 8 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 53 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 76 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 120 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/profile.spec.ts 3 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 10 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 28 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 29 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 60 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 61 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 84 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/restApi.spec.ts 4 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/restApi.spec.ts 82 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/search.spec.ts 56 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/search.spec.ts 83 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/totpSetup.spec.ts 6 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/totpSetup.spec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
docker:S6504 HIGH Dockerfile 47 Allowing non-root users to modify resources copied to an image is security-sensitive Make sure no write permissions are assigned to the executable. TO_REVIEW
docker:S6504 HIGH Dockerfile 48 Allowing non-root users to modify resources copied to an image is security-sensitive Make sure no write permissions are assigned to the executable. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/about/about.component.ts 84 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/administration/administration.component.ts 50 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/administration/administration.component.ts 65 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/data-export/data-export.component.ts 45 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/last-login-ip/last-login-ip.component.ts 36 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/score-board-legacy/score-board-legacy.component.ts 216 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/score-board/score-board.component.ts 71 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/search-result/search-result.component.ts 125 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/search-result/search-result.component.ts 151 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/track-result/track-result.component.ts 41 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S5852 MEDIUM frontend/src/app/change-password/change-password.component.ts 36 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/codingChallenges.ts 66 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/codingChallenges.ts 67 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/startup/registerWebsocketEvents.ts 48 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/utils.ts 216 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM routes/profileImageUrlUpload.ts 19 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM server.ts 227 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM test/api/metricsApiSpec.ts 17 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM test/cypress/support/commands.ts 36 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5693 MEDIUM server.ts 640 Allowing requests with excessive content length is security-sensitive Make sure the content length limit is safe here. TO_REVIEW
typescript:S5693 MEDIUM server.ts 646 Allowing requests with excessive content length is security-sensitive Make sure the content length limit is safe here. TO_REVIEW
typescript:S5693 MEDIUM server.ts 647 Allowing requests with excessive content length is security-sensitive Make sure the content length limit is safe here. TO_REVIEW
docker:S6471 MEDIUM test/smoke/Dockerfile 1 Running containers as a privileged user is security-sensitive The alpine image runs with root as the default user. Make sure it is safe here. TO_REVIEW
docker:S6470 MEDIUM Dockerfile 2 Recursively copying context directories is security-sensitive Copying recursively might inadvertently add sensitive data to the container. Make sure it is safe here. TO_REVIEW
typescript:S1523 MEDIUM routes/captcha.ts 23 Dynamically executing code is security-sensitive Make sure that this dynamic injection or execution of code is safe. TO_REVIEW
typescript:S1523 MEDIUM routes/userProfile.ts 36 Dynamically executing code is security-sensitive Make sure that this dynamic injection or execution of code is safe. TO_REVIEW
typescript:S1523 MEDIUM test/cypress/e2e/contact.spec.ts 258 Dynamically executing code is security-sensitive Make sure that this dynamic injection or execution of code is safe. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 226 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 244 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 292 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 670 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM frontend/src/app/code-snippet/code-snippet.component.ts 146 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM lib/insecurity.ts 55 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 15 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 16 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 17 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 19 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 20 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
docker:S5332 LOW test/smoke/Dockerfile 7 Using clear-text protocols is security-sensitive Make sure that using clear-text protocols is safe here. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_1.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_1.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_1.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_2.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_2.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_2.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_3.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_3.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_3.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_4_correct.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_4_correct.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_4_correct.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts 5 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts 8 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts 5 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts 8 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts 3 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts 4 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts 5 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts 8 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW frontend/src/app/order-completion/order-completion.component.spec.ts 136 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW frontend/src/app/score-board-legacy/score-board-legacy.component.spec.ts 281 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW frontend/src/app/score-board-legacy/score-board-legacy.component.spec.ts 290 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW lib/insecurity.ts 135 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW lib/insecurity.ts 136 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW lib/insecurity.ts 138 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/cypress/e2e/profile.spec.ts 74 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/cypress/e2e/profile.spec.ts 107 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/redirectSpec.ts 42 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/redirectSpec.ts 78 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/utilsSpec.ts 36 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/utilsSpec.ts 40 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 86 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 95 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 104 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 113 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 123 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 132 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 141 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S4507 LOW server.ts 634 Delivering code in production with debug features activated is security-sensitive Make sure this debug feature is deactivated before delivering the code in production. TO_REVIEW
typescript:S5122 LOW server.ts 164 Having a permissive Cross-Origin Resource Sharing policy is security-sensitive Make sure that enabling CORS is safe here. TO_REVIEW
typescript:S5122 LOW server.ts 165 Having a permissive Cross-Origin Resource Sharing policy is security-sensitive Make sure that enabling CORS is safe here. TO_REVIEW
typescript:S1313 LOW test/api/loginApiSpec.ts 253 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 1.2.3.4 is safe here. TO_REVIEW
typescript:S1313 LOW test/api/loginApiSpec.ts 257 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 1.2.3.4 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 14 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 14 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 18 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 0:0:0:0:0:ffff:7f00:1 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 18 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 0:0:0:0:0:ffff:7f00:1 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 26 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address ::ffff:192.0.2.128 is safe here. TO_REVIEW
typescript:S4790 LOW lib/insecurity.ts 43 Using weak hashing algorithms is security-sensitive Make sure this weak hash algorithm is not used in a sensitive context here. TO_REVIEW
Web:S5725 LOW frontend/src/index.html 15 Using remote artifacts without integrity checks is security-sensitive Make sure not using resource integrity feature is safe here. TO_REVIEW
Web:S5725 LOW frontend/src/index.html 16 Using remote artifacts without integrity checks is security-sensitive Make sure not using resource integrity feature is safe here. TO_REVIEW
docker:S6500 LOW Dockerfile 25 Automatically installing recommended packages is security-sensitive Make sure automatically installing recommended packages is safe here. TO_REVIEW
javascript:S4790 LOW Gruntfile.js 76 Using weak hashing algorithms is security-sensitive Make sure this weak hash algorithm is not used in a sensitive context here. TO_REVIEW
docker:S6505 LOW Dockerfile 4 Allowing shell scripts execution during package installation is security-sensitive Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. TO_REVIEW
docker:S6505 LOW Dockerfile 5 Allowing shell scripts execution during package installation is security-sensitive Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. TO_REVIEW
docker:S6505 LOW Dockerfile 19 Allowing shell scripts execution during package installation is security-sensitive Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. TO_REVIEW
Web:S5148 LOW frontend/src/app/nft-unlock/nft-unlock.component.html 63 Authorizing an opened window to access back to the originating window is security-sensitive Make sure not using rel="noopener" is safe here. TO_REVIEW
Web:S5148 LOW frontend/src/app/nft-unlock/nft-unlock.component.html 80 Authorizing an opened window to access back to the originating window is security-sensitive Make sure not using rel="noopener" is safe here. TO_REVIEW

Known Security Rules

Rule Description
azureresourcemanager:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Databases

Code examples

The following code samples are equivalent for Azure Database for MySQL servers, Azure Database for PostgreSQL servers, and Azure Database for MariaDB servers.

For all of these, there is no minimal TLS version enforced by default.

Noncompliant code example

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_0"
      }
    }
  ]
}

Using Bicep:

resource mysqlDbServer 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    minimalTlsVersion: 'TLS1_0' // Noncompliant
  }
}

Compliant solution

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_2"
      }
    }
  ]
}

Using Bicep:

resource mysqlDbServer 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    minimalTlsVersion: 'TLS1_2'
  }
}

How does this work?

As a rule of thumb, you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback is framework-dependent: an outdated framework's TLS v1.2 settings may still enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
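
The same property drives the fix for the other database types listed earlier. For illustration, a minimal Bicep sketch for an Azure Database for PostgreSQL server, with placeholder names, could look like this:

resource postgresDbServer 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    // Reject any client connection negotiating a protocol older than TLS 1.2
    minimalTlsVersion: 'TLS1_2'
  }
}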

azureresourcemanager:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons, such as quick maintenance, saving time, or simply by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.
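
As one hedged illustration of the private-network approach, the Bicep sketch below fronts a web app with a private endpoint so that traffic reaches it through a VNet subnet rather than a public address; the parameter names are hypothetical:

param exampleSubnetId string // hypothetical: resource ID of an existing VNet subnet
param exampleWebAppId string // hypothetical: resource ID of the web app to expose privately

resource examplePrivateEndpoint 'Microsoft.Network/privateEndpoints@2022-05-01' = {
  name: 'example-private-endpoint'
  location: resourceGroup().location
  properties: {
    // The endpoint receives a private IP address from this subnet instead of a public one
    subnet: {
      id: exampleSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'example-connection'
        properties: {
          privateLinkServiceId: exampleWebAppId
          groupIds: [ 'sites' ] // 'sites' is the sub-resource name for App Service web apps
        }
      }
    ]
  }
}

Combined with publicNetworkAccess set to 'Disabled' (see the compliant solution below), this keeps the component reachable only from inside the network.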

Sensitive Code Example

Using publicNetworkAccess to control access to resources:

resource exampleSite 'Microsoft.Web/sites@2020-12-01' = {
  name: 'example-site'
  properties: {
    publicNetworkAccess: 'Enabled'
  }
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": {
          "publicNetworkAccess": "Enabled"
        }
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example",
      "resources": [
        {
          "type": "config",
          "apiVersion": "2020-12-01",
          "name": "example-config",
          "properties": {
            "publicNetworkAccess": "Enabled"
          }
        }
      ]
    }
  ]
}

Using IP address ranges to control access to resources:

resource exampleFirewall 'Microsoft.Sql/servers/firewallRules@2014-04-01' = {
  name: 'example-firewall'
  properties: {
    startIpAddress: '0.0.0.0'
    endIpAddress: '255.255.255.255'
  }
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "0.0.0.0",
        "endIpAddress": "255.255.255.255"
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [
        {
          "type": "firewallRules",
          "apiVersion": "2014-04-01",
          "name": "example-firewall",
          "properties": {
            "startIpAddress": "0.0.0.0",
            "endIpAddress": "255.255.255.255"
          }
        }
      ]
    }
  ]
}

Compliant Solution

Using publicNetworkAccess to control access to resources:

resource exampleSite 'Microsoft.Web/sites@2020-12-01' = {
  name: 'example-site'
  properties: {
    publicNetworkAccess: 'Disabled'
  }
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": {
          "publicNetworkAccess": "Disabled"
        }
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "resources": [
        {
          "type": "config",
          "apiVersion": "2020-12-01",
          "name": "example-config",
          "properties": {
            "publicNetworkAccess": "Disabled"
          }
        }
      ]
    }
  ]
}

Using IP address ranges to control access to resources:

resource exampleFirewall 'Microsoft.Sql/servers/firewallRules@2014-04-01' = {
  name: 'example-firewall'
  properties: {
    startIpAddress: '192.168.0.0'
    endIpAddress: '192.168.255.255'
  }
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "192.168.0.0",
        "endIpAddress": "192.168.255.255"
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [
        {
          "type": "firewallRules",
          "apiVersion": "2014-04-01",
          "name": "example-firewall",
          "properties": {
            "startIpAddress": "192.168.0.0",
            "endIpAddress": "192.168.255.255"
          }
        }
      ]
    }
  ]
}

azureresourcemanager:S6378

Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks.

Authenticating via managed identities to an Azure resource relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets Azure uses are not even accessible to end users.

In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions.

By transparently taking care of Azure Active Directory authentication, Managed Identities remove the need for day-to-day credential management.

Ask Yourself Whether

The resource:

  • Needs to authenticate to Azure resources that support Azure Active Directory (AAD).
  • Uses a different Access Control system that doesn’t guarantee the same security controls as AAD, or no Access Control system at all.

There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:

  • It cannot be shared across resources.
  • Its life cycle is deeply tied to the life cycle of its Azure resource.
  • It provides a unique independent identity.

Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above.
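
For illustration, here is a minimal Bicep sketch of the user-assigned alternative; the identity resource and all names in it are illustrative, not part of this rule:

// Hypothetical user-assigned identity with its own life cycle;
// unlike a system-assigned identity, it can be shared across resources.
resource exampleIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: 'exampleIdentity'
  location: resourceGroup().location
}

resource exampleApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${exampleIdentity.id}': {}
    }
  }
}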

Sensitive Code Example

Using ARM templates:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "apiManagementService"
        }
    ]
}

Using Bicep:

resource sensitiveApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  // Sensitive: no Managed Identity is defined
}

Compliant Solution

Using ARM templates:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "apiManagementService",
            "identity": {
                "type": "SystemAssigned"
            }
        }
    ]
}

Using Bicep:

resource sensitiveApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  identity: {
    type: 'SystemAssigned'
  }
}
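
Enabling the identity is only half of the setup: the identity still needs permissions on the resources it accesses. Below is a minimal sketch granting the system-assigned identity from the compliant Bicep above the built-in Reader role on the current resource group; the role assignment is an illustrative assumption, not part of the rule:

// Hypothetical role assignment for the system-assigned identity above.
// Role-assignment names must be GUIDs, hence the guid() helper;
// acdd72a7-3385-48ef-bd42-f606fba81ae7 is the built-in Reader role.
resource readerAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, 'apiManagementService', 'Reader')
  properties: {
    principalId: sensitiveApiManagementService.identity.principalId
    principalType: 'ServicePrincipal'
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
  }
}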

azureresourcemanager:S6648

Azure Resource Manager templates define parameters as a way to reuse templates in different environments. Secure parameters (secure strings and secure objects) should not be assigned a default value.

Why is this an issue?

Parameters with the types securestring and secureObject are designed to pass sensitive data to the resources being deployed. Unlike other data types, they cannot be accessed after the deployment is completed. They can neither be logged nor used as an output.

Secure parameters can be assigned a default value which will be used if the parameter is not supplied. This default value is not protected and is stored in cleartext in the deployment history.

What is the potential impact?

If the default value contains a secret, it will be disclosed to all accounts that have read access to the deployment history.

How to fix it in ARM templates

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "type": "securestring",
      "defaultValue": "S3CR3T"
    }
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "type": "securestring"
    }
  }
}
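
Instead of a default value, the secret should be supplied at deployment time. One common option is a parameter file that references a Key Vault secret, so only the reference (not the value) is recorded in the deployment history; the subscription ID, resource group, vault, and secret names below are placeholders:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/example-group/providers/Microsoft.KeyVault/vaults/example-vault"
        },
        "secretName": "example-secret"
      }
    }
  }
}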

azureresourcemanager:S6656

When using nested deployments in Azure, template expressions can be evaluated within the scope of the parent template or the scope of the nested template. If such a template expression evaluates a secure value of the parent template, it is possible to expose this value in the deployment history.

Why is this an issue?

Parameters with the types securestring and secureObject are designed to pass sensitive data to the resources being deployed. Secure parameters cannot be accessed after the deployment is completed: they can neither be logged nor used as an output.

When used in nested deployments, however, it is possible to embed secure parameters in such a way that they remain visible afterward.

What is the potential impact?

If the nested deployment contains a secure parameter in this way, then the value of this parameter may be readable in the deployment history. This can lead to important credentials being leaked to unauthorized accounts.

How to fix it in ARM templates

By setting properties.expressionEvaluationOptions.scope to Inner in the parent template, template evaluations are limited to the scope of the nested template. This makes it impossible to expose secure parameters defined in the parent template.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "type": "securestring",
      "defaultValue": "[newGuid()]"
    }
  },
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": [
            {
              "name": "example",
              "type": "Microsoft.Compute/virtualMachines",
              "apiVersion": "2022-11-01",
              "properties": {
                "osProfile": {
                  "adminUsername": "[parameters('adminUsername')]"
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "expressionEvaluationOptions": {
          "scope": "Inner"
        },
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "adminUsername": {
              "type": "securestring",
              "defaultValue": "[newGuid()]"
            }
          },
          "resources": [
            {
              "name": "example",
              "type": "Microsoft.Compute/virtualMachines",
              "apiVersion": "2022-11-01",
              "properties": {
                "osProfile": {
                  "adminUsername": "[parameters('adminUsername')]"
                }
              }
            }
          ]
        }
      }
    }
  ]
}
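
For comparison, Bicep compiles every module into a nested deployment with the inner expression evaluation scope, so the Bicep equivalent avoids this issue by construction. A minimal sketch; the module file vm.bicep and its parameter are illustrative:

// main.bicep (illustrative): module parameters are always evaluated in
// the nested deployment's own (inner) scope.
module example 'vm.bicep' = {
  name: 'example'
}

// vm.bicep (illustrative) declares its own secure parameter and uses it
// for the virtual machine, mirroring the compliant ARM template above:
//
//   @secure()
//   param adminUsername string = newGuid()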

azureresourcemanager:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the http protocol is being deprecated by major web browsers.

In the past, the use of clear-text protocols has led to several publicly documented vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against tampering or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.
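
Beyond replacing clear-text protocols, many resources can additionally pin a minimum TLS version so that downgraded handshakes are refused. A minimal Bicep sketch for a storage account; the account name and SKU are illustrative:

resource exampleStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'examplestorage'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    supportsHttpsTrafficOnly: true // reject plain http
    minimumTlsVersion: 'TLS1_2' // reject TLS 1.0 and 1.1 handshakes
  }
}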

Sensitive Code Example

For Microsoft.Web/sites:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "httpsOnly": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites@2022-03-01' = {
  properties: {
    httpsOnly: false // Sensitive
  }
}

For Microsoft.Web/sites/config:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "name": "sites/example",
      "apiVersion": "2022-09-01",
      "properties": {
        "ftpsState": "AllAllowed"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
  properties: {
    ftpsState: 'AllAllowed' // Sensitive
  }
}

For Microsoft.Storage/storageAccounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "supportsHttpsTrafficOnly": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    supportsHttpsTrafficOnly: false // Sensitive
  }
}

For Microsoft.ApiManagement/service/apis:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "name": "service/example",
      "apiVersion": "2022-08-01",
      "properties": {
        "protocols": ["http"]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
  properties: {
    protocols: ['http'] // Sensitive
  }
}

For Microsoft.Cdn/profiles/endpoints:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cdn/profiles/endpoints",
      "name": "profiles/example",
      "apiVersion": "2021-06-01",
      "properties": {
        "isHttpAllowed": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
  properties: {
    isHttpAllowed: true // Sensitive
  }
}

For Microsoft.Cache/redisEnterprise/databases:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "name": "redisEnterprise/example",
      "apiVersion": "2022-01-01",
      "properties": {
        "clientProtocol": "Plaintext"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
  properties: {
    clientProtocol: 'Plaintext' // Sensitive
  }
}

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "name": "example",
      "apiVersion": "2017-12-01",
      "properties": {
        "sslEnforcement": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    sslEnforcement: 'Disabled' // Sensitive
  }
}

Compliant Solution

For Microsoft.Web/sites:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "httpsOnly": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites@2022-03-01' = {
  properties: {
    httpsOnly: true
  }
}

For Microsoft.Web/sites/config:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "name": "sites/example",
      "apiVersion": "2022-09-01",
      "properties": {
        "ftpsState": "FtpsOnly"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
  properties: {
    ftpsState: 'FtpsOnly'
  }
}

For Microsoft.Storage/storageAccounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "supportsHttpsTrafficOnly": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    supportsHttpsTrafficOnly: true
  }
}

For Microsoft.ApiManagement/service/apis:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "name": "service/example",
      "apiVersion": "2022-08-01",
      "properties": {
        "protocols": ["https"]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
  properties: {
    protocols: ['https']
  }
}

For Microsoft.Cdn/profiles/endpoints:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cdn/profiles/endpoints",
      "name": "profiles/example",
      "apiVersion": "2021-06-01",
      "properties": {
        "isHttpAllowed": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
  properties: {
    isHttpAllowed: false
  }
}

For Microsoft.Cache/redisEnterprise/databases:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "name": "redisEnterprise/example",
      "apiVersion": "2022-01-01",
      "properties": {
        "clientProtocol": "Encrypted"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
  properties: {
    clientProtocol: 'Encrypted'
  }
}

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "name": "example",
      "apiVersion": "2017-12-01",
      "properties": {
        "sslEnforcement": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    sslEnforcement: 'Enabled'
  }
}

azureresourcemanager:S6388

Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to access unencrypted information.

Ask Yourself Whether

  • The service contains sensitive information that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt cloud storage that contains sensitive information.

Sensitive Code Example

For Microsoft.AzureArcData/sqlServerInstances/databases:

Disabled encryption on a SQL Server instance database:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "databases/example",
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": {
        "databaseOptions": {
          "isEncrypted": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.AzureArcData/sqlServerInstances/databases@2023-03-15-preview' = {
  properties: {
    databaseOptions: {
      isEncrypted: false
    }
  }
}

For Microsoft.Compute/disks, encryption is disabled by default.

For Microsoft.Compute/snapshots:

Disabled disk encryption with settings collection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    encryptionSettingsCollection: {
      enabled: false
    }
  }
}

For Microsoft.Compute/virtualMachines:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "securityProfile": {
          "encryptionAtHost": false
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    securityProfile: {
      encryptionAtHost: false
    }
  }
}

Disabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId"
            }
          ]
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      dataDisks: [
        {
          name: 'myDisk'
        }
      ]
    }
  }
}

Disabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "encryptionSettings": {
              "enabled": false
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        encryptionSettings: {
          enabled: false
        }
      }
    }
  }
}

Disabled encryption for OS managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "id": "myDiskId"
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        managedDisk: {
          id: 'myDiskId'
        }
      }
    }
  }
}

For Microsoft.Compute/virtualMachineScaleSets:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "securityProfile": {
            "encryptionAtHost": false
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      securityProfile: {
        encryptionAtHost: false
      }
    }
  }
}

Disabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk"
              }
            ]
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDataDisk'
          }
        ]
      }
    }
  }
}

Disabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk"
            }
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        osDisk: {
          name: 'myOsDisk'
        }
      }
    }
  }
}

For Microsoft.ContainerService/managedClusters:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": {
        "agentPoolProfiles": [
          {
            "enableEncryptionAtHost": false
          }
        ]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ContainerService/managedClusters@2023-03-02-preview' = {
  properties: {
    agentPoolProfiles: [
      {
        enableEncryptionAtHost: false
      }
    ]
  }
}

For Microsoft.DataLakeStore/accounts:

Disabled encryption for Data Lake Store:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": {
        "encryptionState": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DataLakeStore/accounts@2016-11-01' = {
  properties: {
    encryptionState: 'Disabled'
  }
}

For Microsoft.DBforMySQL/servers:

Disabled infrastructure double encryption for MySQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Disabled'
  }
}

For Microsoft.DBforPostgreSQL/servers:

Disabled infrastructure double encryption for PostgreSQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Disabled'
  }
}

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Disabled encryption for a Cassandra Cluster datacenter’s managed disk and backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "cassandraClusters/example",
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": {
        "diskCapacity": 4
      }
    }
  ]
}
resource symbolicname 'Microsoft.DocumentDB/cassandraClusters/dataCenters@2023-04-15' = {
  name: 'string'
  parent: parent
  properties: {
    diskCapacity: 4
  }
}

For Microsoft.HDInsight/clusters:

Disabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": false
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: false
        }
      ]
    }
  }
}

Disabled encryption for data disk at application level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "clusters/example",
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": false
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters/applications@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: false
        }
      ]
    }
  }
}

Disabled encryption for resource disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "diskEncryptionProperties": {
          "encryptionAtHost": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    diskEncryptionProperties: {
      encryptionAtHost: false
    }
  }
}

For Microsoft.Kusto/clusters:

Disabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": {
        "enableDiskEncryption": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Kusto/clusters@2022-12-29' = {
  properties: {
    enableDiskEncryption: false
  }
}

For Microsoft.RecoveryServices/vaults:

Disabled encryption on infrastructure:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryption": {
          "infrastructureEncryption": "Disabled"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
  properties: {
    encryption: {
      infrastructureEncryption: 'Disabled'
    }
  }
}

Disabled encryption on infrastructure for backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "vaults/example",
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": {
        "infrastructureEncryptionState": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults/backupEncryptionConfigs@2023-01-01' = {
  properties: {
    encryptionAtRestType: '{CustomerManaged | MicrosoftManaged}'
    infrastructureEncryptionState: 'Disabled'
  }
}

For Microsoft.RedHatOpenShift/openShiftClusters:

Disabled disk encryption for master profile and worker profiles:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": {
          "encryptionAtHost": "Disabled"
        },
        "workerProfiles": [
          {
            "encryptionAtHost": "Disabled"
          }
        ]
      }
    }
  ]
}
resource symbolicname 'Microsoft.RedHatOpenShift/openShiftClusters@2022-09-04' = {
  properties: {
    masterProfile: {
      encryptionAtHost: 'Disabled'
    }
    workerProfiles: [
      {
        encryptionAtHost: 'Disabled'
      }
    ]
  }
}

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Disabled encryption for SQL Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": {
        "autoBackupSettings": {
          "enableEncryption": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-08-01-preview' = {
  properties: {
    autoBackupSettings: {
      enableEncryption: false
    }
  }
}

For Microsoft.Storage/storageAccounts:

Disabled enforcement of infrastructure encryption for double encryption of data:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "encryption": {
          "requireInfrastructureEncryption": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    encryption: {
      requireInfrastructureEncryption: false
    }
  }
}

For Microsoft.Storage/storageAccounts/encryptionScopes:

Disabled enforcement of infrastructure encryption for double encryption of data at the encryption scope level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "storageAccounts/example",
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": {
        "requireInfrastructureEncryption": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts/encryptionScopes@2022-09-01' = {
  properties: {
    requireInfrastructureEncryption: false
  }
}

Compliant Solution

For Microsoft.AzureArcData/sqlServerInstances/databases:

Enabled encryption on a SQL Server instance database:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "databases/example",
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": {
        "databaseOptions": {
          "isEncrypted": true
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.AzureArcData/sqlServerInstances/databases@2023-03-15-preview' = {
  properties: {
    databaseOptions: {
      isEncrypted: true
    }
  }
}

For Microsoft.Compute/disks:

Enabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "string"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  properties: {
    encryption: {
      diskEncryptionSetId: 'string'
      type: 'string'
    }
  }
}

Enabled encryption through setting encryptionSettingsCollection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [
            {
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  properties: {
    encryptionSettingsCollection: {
      enabled: true
      encryptionSettings: [
        {
          diskEncryptionKey: {
            secretUrl: 'string'
            sourceVault: {
              id: 'string'
            }
          }
        }
      ]
    }
  }
}

Enabled encryption through a security profile for an OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' | 'TrustedLaunch'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  properties: {
    securityProfile: {
      secureVMDiskEncryptionSetId: 'string'
      securityType: '{ConfidentialVM_DiskEncryptedWithCustomerKey | ConfidentialVM_DiskEncryptedWithPlatformKey | ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey | TrustedLaunch}'
    }
  }
}

For Microsoft.Compute/snapshots:

Enabled disk encryption for snapshot:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "{'EncryptionAtRestWithCustomerKey' | 'EncryptionAtRestWithPlatformAndCustomerKeys' | 'EncryptionAtRestWithPlatformKey'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    encryption: {
      diskEncryptionSetId: 'string'
      type: '{EncryptionAtRestWithCustomerKey | EncryptionAtRestWithPlatformAndCustomerKeys | EncryptionAtRestWithPlatformKey}'
    }
  }
}

Enabled disk encryption with settings collection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [
            {
              "diskEncryptionKey": {
                "secretUrl": "",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          ],
          "encryptionSettingsVersion": "{'1.0' | '1.1'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    encryptionSettingsCollection: {
      enabled: true
      encryptionSettings: [
        {
          diskEncryptionKey: {
            secretUrl: ''
            sourceVault: {
              id: 'string'
            }
          }
        }
      ]
      encryptionSettingsVersion: '{1.0 | 1.1}'
    }
  }
}

Enabled disk encryption through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' |'TrustedLaunch'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    securityProfile: {
      secureVMDiskEncryptionSetId: 'string'
      securityType: '{ConfidentialVM_DiskEncryptedWithCustomerKey | ConfidentialVM_DiskEncryptedWithPlatformKey | ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey | TrustedLaunch}'
    }
  }
}

For Microsoft.Compute/virtualMachines:

Enabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "securityProfile": {
          "encryptionAtHost": true
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    securityProfile: {
      encryptionAtHost: true
    }
  }
}

Enabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId",
              "managedDisk": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          ]
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      dataDisks: [
        {
          name: 'myDisk'
          managedDisk: {
            diskEncryptionSet: {
              id: 'string'
            }
          }
        }
      ]
    }
  }
}

Enabled encryption for managed disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId",
              "managedDisk": {
                "securityProfile": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            }
          ]
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      dataDisks: [
        {
          name: 'myDisk'
          managedDisk: {
            securityProfile: {
              diskEncryptionSet: {
                id: 'string'
              }
            }
          }
        }
      ]
    }
  }
}

Enabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "encryptionSettings": {
              "enabled": true,
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        encryptionSettings: {
          enabled: true
          diskEncryptionKey: {
            secretUrl: 'string'
            sourceVault: {
              id: 'string'
            }
          }
        }
      }
    }
  }
}

Enabled encryption for OS managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "id": "myDiskId",
              "diskEncryptionSet": {
                "id": "string"
              }
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        managedDisk: {
          id: 'myDiskId'
          diskEncryptionSet: {
            id: 'string'
          }
        }
      }
    }
  }
}

Enabled encryption for OS managed disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "securityProfile": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        managedDisk: {
          id: 'myDiskId'
          securityProfile: {
            diskEncryptionSet: {
              id: 'string'
            }
          }
        }
      }
    }
  }
}

For Microsoft.Compute/virtualMachineScaleSets:

Enabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "securityProfile": {
            "encryptionAtHost": true
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      securityProfile: {
        encryptionAtHost: true
      }
    }
  }
}

Enabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk",
                "managedDisk": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDataDisk'
            managedDisk: {
              diskEncryptionSet: {
                id: 'string'
              }
            }
          }
        ]
      }
    }
  }
}

Enabled encryption for data disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk",
                "managedDisk": {
                  "securityProfile": {
                    "diskEncryptionSet": {
                      "id": "string"
                    }
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDataDisk'
            managedDisk: {
              securityProfile: {
                diskEncryptionSet: {
                  id: 'string'
                }
              }
            }
          }
        ]
      }
    }
  }
}

Enabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk",
              "managedDisk": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        osDisk: {
          name: 'myOsDisk'
          managedDisk: {
            diskEncryptionSet: {
              id: 'string'
            }
          }
        }
      }
    }
  }
}

Enabled encryption for OS disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk",
              "managedDisk": {
                "securityProfile": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        osDisk: {
          name: 'myOsDisk'
          managedDisk: {
            securityProfile: {
              diskEncryptionSet: {
                id: 'string'
              }
            }
          }
        }
      }
    }
  }
}

For Microsoft.ContainerService/managedClusters:

Enabled encryption at host and set the disk encryption set ID:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": {
        "agentPoolProfiles": [
          {
            "enableEncryptionAtHost": true
          }
        ],
        "diskEncryptionSetID": "string"
      }
    }
  ]
}
resource symbolicname 'Microsoft.ContainerService/managedClusters@2023-03-02-preview' = {
  properties: {
    agentPoolProfiles: [
      {
        enableEncryptionAtHost: true
      }
    ]
    diskEncryptionSetID: 'string'
  }
}

For Microsoft.DataLakeStore/accounts:

Enabled encryption for Data Lake Store:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": {
        "encryptionState": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DataLakeStore/accounts@2016-11-01' = {
  properties: {
    encryptionState: 'Enabled'
  }
}

For Microsoft.DBforMySQL/servers:

Enabled infrastructure double encryption for MySQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Enabled'
  }
}

For Microsoft.DBforPostgreSQL/servers:

Enabled infrastructure double encryption for PostgreSQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Enabled'
  }
}

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Enabled encryption for a Cassandra Cluster datacenter’s managed disk and backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "cassandraClusters/example",
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": {
        "diskCapacity": 4,
        "backupStorageCustomerKeyUri": "string",
        "managedDiskCustomerKeyUri": "string"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DocumentDB/cassandraClusters/dataCenters@2023-04-15' = {
  name: 'string'
  parent: parent
  properties: {
    diskCapacity: 4
    backupStorageCustomerKeyUri: 'string'
    managedDiskCustomerKeyUri: 'string'
  }
}

For Microsoft.HDInsight/clusters:

Enabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": true
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: true
        }
      ]
    }
  }
}

Enabled encryption for data disk at application level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "clusters/example",
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": true
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters/applications@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: true
        }
      ]
    }
  }
}

Enabled encryption for resource disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "diskEncryptionProperties": {
          "encryptionAtHost": true
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    diskEncryptionProperties: {
      encryptionAtHost: true
    }
  }
}

For Microsoft.Kusto/clusters:

Enabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": {
        "enableDiskEncryption": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Kusto/clusters@2022-12-29' = {
  properties: {
    enableDiskEncryption: true
  }
}

For Microsoft.RecoveryServices/vaults:

Enabled encryption on infrastructure:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryption": {
          "infrastructureEncryption": "Enabled"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
  properties: {
    encryption: {
      infrastructureEncryption: 'Enabled'
    }
  }
}

Enabled encryption on infrastructure for backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "vaults/example",
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryptionAtRestType": "{'CustomerManaged' | 'MicrosoftManaged'}",
        "infrastructureEncryptionState": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults/backupEncryptionConfigs@2023-01-01' = {
  properties: {
    encryptionAtRestType: '{CustomerManaged | MicrosoftManaged}'
    infrastructureEncryptionState: 'Enabled'
  }
}

For Microsoft.RedHatOpenShift/openShiftClusters:

Enabled disk encryption for master profile and worker profiles:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": {
          "diskEncryptionSetId": "string",
          "encryptionAtHost": "Enabled"
        },
        "workerProfiles": [
          {
            "diskEncryptionSetId": "string",
            "encryptionAtHost": "Enabled"
          }
        ]
      }
    }
  ]
}
resource symbolicname 'Microsoft.RedHatOpenShift/openShiftClusters@2022-09-04' = {
  properties: {
    masterProfile: {
      diskEncryptionSetId: 'string'
      encryptionAtHost: 'Enabled'
    }
    workerProfiles: [
      {
        diskEncryptionSetId: 'string'
        encryptionAtHost: 'Enabled'
      }
    ]
  }
}

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Enabled encryption for SQL Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": {
        "autoBackupSettings": {
          "enableEncryption": true,
          "password": "string"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-08-01-preview' = {
  properties: {
    autoBackupSettings: {
      enableEncryption: true
      password: 'string'
    }
  }
}

For Microsoft.Storage/storageAccounts:

Enabled enforcement of infrastructure encryption for double encryption of data:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "encryption": {
          "requireInfrastructureEncryption": true
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    encryption: {
      requireInfrastructureEncryption: true
    }
  }
}

For Microsoft.Storage/storageAccounts/encryptionScopes:

Enabled enforcement of infrastructure encryption for double encryption of data at the encryption scope level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "storageAccounts/example",
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": {
        "requireInfrastructureEncryption": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts/encryptionScopes@2022-09-01' = {
  properties: {
    requireInfrastructureEncryption: true
  }
}

See

azureresourcemanager:S6321

Why is this an issue?

Cloud platforms such as Azure support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkSecurityGroups/example",
      "type": "Microsoft.Network/networkSecurityGroups/securityRules",
      "apiVersion": "2022-11-01",
      "properties": {
        "protocol": "*",
        "destinationPortRange": "*",
        "sourceAddressPrefix": "*",
        "access": "Allow",
        "direction": "Inbound"
      }
    }
  ]
}
resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: '*'
    destinationPortRange: '*'
    sourceAddressPrefix: '*'
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkSecurityGroups/example",
      "type": "Microsoft.Network/networkSecurityGroups/securityRules",
      "apiVersion": "2022-11-01",
      "properties": {
          "protocol": "*",
          "destinationPortRange": "22",
          "sourceAddressPrefix": "10.0.0.0/24",
          "access": "Allow",
          "direction": "Inbound"
      }
    }
  ]
}
resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: '*'
    destinationPortRange: '22'
    sourceAddressPrefix: '10.0.0.0/24'
  }
}

Resources

Documentation

Standards

azureresourcemanager:S6364

Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups make it possible to overcome data corruption or unavailability by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Azure App Service:

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: 'webApp'
}

resource backup 'config@2022-03-01' = {
  name: 'backup'
  parent: webApp
  properties: {
    backupSchedule: {
      frequencyInterval: 1
      frequencyUnit: 'Day'
      keepAtLeastOneBackup: true
      retentionPeriodInDays: 2  // Sensitive
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "webApp",
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-03-01",
      "name": "webApp/backup",
      "properties": {
        "backupSchedule": {
          "frequencyInterval": 1,
          "frequencyUnit": "Day",
          "keepAtLeastOneBackup": true,
          "retentionPeriodInDays": 2
        }
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', 'webApp')]"
      ]
    }
  ]
}

For Azure Cosmos DB accounts:

resource cosmosDb 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
    properties: {
        backupPolicy: {
            type: 'Periodic'
            periodicModeProperties: {
                backupIntervalInMinutes: 1440
                backupRetentionIntervalInHours: 8  // Sensitive
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2023-04-15",
      "properties": {
        "backupPolicy": {
          "type": "Periodic",
          "periodicModeProperties": {
            "backupIntervalInMinutes": 1440,
            "backupRetentionIntervalInHours": 8
          }
        }
      }
    }
  ]
}

For Azure Backup vault policies:

resource vault 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
    name: 'testVault'

    resource backupPolicy 'backupPolicies@2023-01-01' = {
        name: 'backupPolicy'
        properties: {
            backupManagementType: 'AzureSql'
            retentionPolicy: {
                retentionPolicyType: 'SimpleRetentionPolicy'
                retentionDuration: {
                    count: 2  // Sensitive
                    durationType: 'Days'
                }
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "name": "testVault",
      "resources": [
        {
          "type": "backupPolicies",
          "apiVersion": "2023-01-01",
          "name": "testVault/backupPolicy",
          "properties": {
            "backupManagementType": "AzureSql",
            "retentionPolicy": {
              "retentionPolicyType": "SimpleRetentionPolicy",
              "retentionDuration": {
                "count": 2,
                "durationType": "Days"
              }
            }
          }
        }
      ]
    }
  ]
}

Compliant Solution

For Azure App Service:

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: 'webApp'
}

resource backup 'config@2022-03-01' = {
  name: 'backup'
  parent: webApp
  properties: {
    backupSchedule: {
      frequencyInterval: 1
      frequencyUnit: 'Day'
      keepAtLeastOneBackup: true
      retentionPeriodInDays: 8
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "webApp",
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-03-01",
      "name": "webApp/backup",
      "properties": {
        "backupSchedule": {
          "frequencyInterval": 1,
          "frequencyUnit": "Day",
          "keepAtLeastOneBackup": true,
          "retentionPeriodInDays": 30
        }
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', 'webApp')]"
      ]
    }
  ]
}

For Azure Cosmos DB accounts:

resource cosmosDb 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
    properties: {
        backupPolicy: {
            type: 'Periodic'
            periodicModeProperties: {
                backupIntervalInMinutes: 1440
                backupRetentionIntervalInHours: 192
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2023-04-15",
      "properties": {
        "backupPolicy": {
          "type": "Periodic",
          "periodicModeProperties": {
            "backupIntervalInMinutes": 1440,
            "backupRetentionIntervalInHours": 720
          }
        }
      }
    }
  ]
}

For Azure Backup vault policies:

resource vault 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
    name: 'testVault'

    resource backupPolicy 'backupPolicies@2023-01-01' = {
        name: 'backupPolicy'
        properties: {
            backupManagementType: 'AzureSql'
            retentionPolicy: {
                retentionPolicyType: 'SimpleRetentionPolicy'
                retentionDuration: {
                    count: 8
                    durationType: 'Days'
                }
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "name": "testVault",
      "resources": [
        {
          "type": "backupPolicies",
          "apiVersion": "2023-01-01",
          "name": "testVault/backupPolicy",
          "properties": {
            "backupManagementType": "AzureSql",
            "retentionPolicy": {
              "retentionPolicyType": "SimpleRetentionPolicy",
              "retentionDuration": {
                "count": 30,
                "durationType": "Days"
              }
            }
          }
        }
      ]
    }
  ]
}
azureresourcemanager:S6379

Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.

Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.

In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require this resource to disable its administrative accounts or permissions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Batch/batchAccounts/pools",
      "apiVersion": "2022-10-01",
      "properties": {
        "startTask": {
          "userIdentity": {
            "autoUser": {
              "elevationLevel": "Admin"
            }
          }
        }
      }
    }
  ]
}
resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  properties: {
    startTask: {
      userIdentity: {
        autoUser: {
          elevationLevel: 'Admin' // Sensitive
        }
      }
    }
  }
}

For Azure Container Registries:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerRegistry/registries",
      "apiVersion": "2023-01-01-preview",
      "properties": {
        "adminUserEnabled": true
      }
    }
  ]
}
resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
  properties: {
    adminUserEnabled: true // Sensitive
  }
}

Compliant Solution

For Azure Batch Pools:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Batch/batchAccounts/pools",
      "apiVersion": "2022-10-01",
      "properties": {
        "startTask": {
          "userIdentity": {
            "autoUser": {
              "elevationLevel": "NonAdmin"
            }
          }
        }
      }
    }
  ]
}
resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  properties: {
    startTask: {
      userIdentity: {
        autoUser: {
          elevationLevel: 'NonAdmin'
        }
      }
    }
  }
}

For Azure Container Registries:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerRegistry/registries",
      "apiVersion": "2023-01-01-preview",
      "properties": {
        "adminUserEnabled": false
      }
    }
  ]
}
resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
  properties: {
    adminUserEnabled: false
  }
}

See

azureresourcemanager:S6380

Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.

Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.

Using authentication coupled with fine-grained authorizations helps bring defense in depth and provides traceability to investigators of security incidents.

Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • This Azure resource stores or processes sensitive data.
  • Compliance policies require access to this resource to be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access.

If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Service:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2022-03-01",
            "name": "example"
        }
    ]
}
resource appService 'Microsoft.Web/sites@2022-09-01' = {
    name: 'example'
    // Sensitive: no authentication defined
}

For API Management:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "example"
        }
    ]
}
resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
    name: 'example'
    // Sensitive: no portal authentication defined

    resource apis 'apis@2022-09-01-preview' = {
        name: 'exampleApi'
        properties: {
            path: '/test'
            // Sensitive: no API authentication defined
        }
    }
}

For Data Factory Linked Services:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories/linkedservices",
            "apiVersion": "2018-06-01",
            "name": "example",
            "properties": {
                "type": "Web",
                "typeProperties": {
                    "authenticationType": "Anonymous"
                }
            }
        }
    ]
}
resource linkedService 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
    name: 'example'
    properties: {
        type: 'Web'
        typeProperties: {
            authenticationType: 'Anonymous' // Sensitive
        }
    }
}

For Storage Accounts and Storage Containers:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "properties": {
                "allowBlobPublicAccess": true
            }
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    name: 'example'
    properties: {
        allowBlobPublicAccess: true // Sensitive
    }
}
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "resources": [
                {
                    "type": "blobServices/containers",
                    "apiVersion": "2022-09-01",
                    "name": "blobContainerExample",
                    "properties": {
                        "publicAccess": "Blob"
                    }
                }
            ]
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    name: 'example'

    resource blobService 'blobServices@2022-09-01' = {
        name: 'default'

        resource containers 'containers@2022-09-01' = {
            name: 'exampleContainer'
            properties: {
                publicAccess: 'Blob' // Sensitive
            }
        }
    }
}

For Redis Caches:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Cache/redis",
            "apiVersion": "2022-06-01",
            "name": "example",
            "properties": {
                "redisConfiguration": {
                    "authnotrequired": "true"
                }
            }
        }
    ]
}
resource redisCache 'Microsoft.Cache/redis@2023-04-01' = {
    name: 'example'
    location: location
    properties: {
        redisConfiguration: {
            authnotrequired: 'true' // Sensitive
        }
    }
}

Compliant Solution

For App Services and equivalent:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2022-03-01",
            "name": "example",
            "resources": [
                {
                    "type": "config",
                    "apiVersion": "2022-03-01",
                    "name": "authsettingsV2",
                    "properties": {
                        "globalValidation": {
                            "requireAuthentication": true,
                            "unauthenticatedClientAction": "RedirectToLoginPage"
                        }
                    }
                }
            ]
        }
    ]
}
resource appService 'Microsoft.Web/sites@2022-09-01' = {
    name: 'example'

    resource authSettings 'config@2022-09-01' = { // Compliant
        name: 'authsettingsV2'
        properties: {
            globalValidation: {
                requireAuthentication: true
                unauthenticatedClientAction: 'RedirectToLoginPage'
            }
            platform: {
                enabled: true
            }
        }
    }
}

For API Management:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "example",
            "resources": [
                {
                    "type": "portalsettings",
                    "apiVersion": "2022-09-01-preview",
                    "name": "signin",
                    "properties": {
                        "enabled": true
                    }
                },
                {
                    "type": "apis",
                    "apiVersion": "2022-09-01-preview",
                    "name": "exampleApi",
                    "properties": {
                        "authenticationSettings": {
                            "openid": {
                                "bearerTokenSendingMethods": ["authorizationHeader"],
                                "openidProviderId": "<an OpenID provider ID>"
                            }
                        }
                    }
                }
            ]
        }
    ]
}
resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
    name: 'example'

    resource portalSettings 'portalsettings@2022-09-01-preview' = {
        name: 'signin'
        properties: {
            enabled: true // Compliant: Sign-in is enabled for portal access
        }
    }

    resource apis 'apis@2022-09-01-preview' = {
        name: 'exampleApi'
        properties: {
            path: '/test'
            authenticationSettings: { // Compliant: API has authentication enabled
                openid: {
                    bearerTokenSendingMethods: ['authorizationHeader']
                    openidProviderId: '<an OpenID provider ID>'
                }
            }
        }
    }
}

For Data Factory Linked Services:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories/linkedservices",
            "apiVersion": "2018-06-01",
            "name": "example",
            "properties": {
                "type": "Web",
                "typeProperties": {
                    "authenticationType": "Basic"
                }
            }
        }
    ]
}
@secure()
@description('The password for authentication')
param password string

resource linkedService 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
    name: 'example'
    properties: {
        type: 'Web'
        typeProperties: {
            authenticationType: 'Basic' // Compliant
            username: 'test'
            password: {
                type: 'SecureString'
                value: password
            }
        }
    }
}

For Storage Accounts:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "properties": {
                "allowBlobPublicAccess": false
            }
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'example'
  properties: {
    allowBlobPublicAccess: false // Compliant
  }
}
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "resources": [
                {
                    "type": "blobServices/containers",
                    "apiVersion": "2022-09-01",
                    "name": "blobContainerExample",
                    "properties": {
                        "publicAccess": "None"
                    }
                }
            ]
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    name: 'example'

    resource blobService 'blobServices@2022-09-01' = {
        name: 'default'

        resource containers 'containers@2022-09-01' = {
            name: 'exampleContainer'
            properties: {
                publicAccess: 'None' // Compliant
            }
        }
    }
}

For Redis Caches:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Cache/redis",
            "apiVersion": "2022-06-01",
            "name": "example",
            "properties": {
                "redisConfiguration": {}
            }
        }
    ]
}
resource redisCache 'Microsoft.Cache/redis@2023-04-01' = {
    name: 'example'
    location: location
    properties: {
        redisConfiguration: {
            // Compliant: authentication is enabled by default
        }
    }
}

See

azureresourcemanager:S6381

Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
  • Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
  • User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Owner roles to less than 3 people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.

Sensitive Code Example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "properties": {
        "description": "Assign the contributor role",
        "principalId": "string",
        "principalType": "ServicePrincipal",
        "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: tenant()
  properties: {
    description: 'Assign the contributor role'
    principalId: 'string'
    principalType: 'ServicePrincipal'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c') // Sensitive
  }
}

Compliant Solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "properties": {
        "description": "Assign the reader role",
        "principalId": "string",
        "principalType": "ServicePrincipal",
        "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: tenant()
  properties: {
    description: 'Assign the reader role'
    principalId: 'string'
    principalType: 'ServicePrincipal'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
  }
}

See

azureresourcemanager:S6385

Defining a custom role at the Subscription or Management group scope that allows all actions will give it the same capabilities as the built-in Owner role.

Why is this an issue?

In Azure, the Owner role of a Subscription or a Management group provides entities it is assigned to with the maximum level of privileges. The Owner role allows managing all resources and assigning any role to other entities.

Because it is a powerful entitlement, it should be granted to as few users as possible.

When a custom role has the same level of permissions as the Owner role, the chances are greater that high privileges are granted to too many users.

What is the potential impact?

Custom roles that provide the same level of permissions as Owner might indicate a configuration issue. Any entity assigned with it can perform any action on the Subscription or Management group, including changing roles and permissions.

If the affected role is unexpectedly assigned to users, they can compromise the affected scope. They can do so in the long term by assigning dangerous roles to other users or entities.

Depending on the scope to which the role is assignable, the exact impact of a successful exploitation may vary. It generally ranges from data compromise to the takeover of the cloud infrastructure.

Infrastructure takeover

By obtaining the right role, an attacker can gain control over part or all of the Azure infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining.

This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions.

Furthermore, corporate Azure infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers with the correct role could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Financial loss

Financial losses can occur when a malicious user is able to use a paid third-party service. Each user assigned an overly permissive role will be able to use that service without limit for their own needs, including in ways that were not intended.

This additional use will lead to added costs with the Azure service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected environment. This might result in a partial denial of service for all legitimate users.

How to fix it

To reduce the risk of intrusion through a compromised owner account, it is recommended to limit the number of subscription owners.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleDefinitions",
      "apiVersion": "2022-04-01",
      "properties": {
        "permissions": [
          {
            "actions": ["*"],
            "notActions": []
          }
        ],
        "assignableScopes": [
          "[subscription().id]"
        ]
      }
    }
  ]
}
targetScope = 'managementGroup'

resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = { // Sensitive
  properties: {
    permissions: [
      {
        actions: ['*']
        notActions: []
      }
    ]

    assignableScopes: [
      managementGroup().id
    ]
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleDefinitions",
      "apiVersion": "2022-04-01",
      "properties": {
        "permissions": [
          {
            "actions": ["Microsoft.Compute/*"],
            "notActions": []
          }
        ],
        "assignableScopes": [
          "[subscription().id]"
        ]
      }
    }
  ]
}
targetScope = 'managementGroup'

resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  properties: {
    permissions: [
      {
        actions: ['Microsoft.Compute/*']
        notActions: []
      }
    ]

    assignableScopes: [
      managementGroup().id
    ]
  }
}

Going the extra mile

Here is a list of recommendations that can be followed regarding good usage of roles:

  • Apply the least privilege principle by creating a custom role with as few permissions as possible.
  • As custom roles can be updated, gradually add atomic permissions when required.
  • Limit the assignable scopes of the custom role to a set of Resources or Resource Groups, as shown in the sketch after this list.
  • When necessary, use the built-in Owner role instead of a custom role granting subscription owner capabilities.
  • Limit the assignments of Owner roles to less than three people or service principals.
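
For illustration, here is a minimal Bicep sketch of such a custom role, following the recommendations above. The role name, description, and the 'Microsoft.Compute/*/read' action set are hypothetical placeholders; adapt them to the permissions your principals actually need.

targetScope = 'resourceGroup'

// Hypothetical least-privilege custom role, assignable only within this resource group
resource computeReaderRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  name: guid(resourceGroup().id, 'computeReaderRole')
  properties: {
    roleName: 'Compute Reader (example)'
    description: 'Read-only access to compute resources in a single resource group'
    type: 'CustomRole'
    permissions: [
      {
        actions: ['Microsoft.Compute/*/read'] // atomic permissions, extended only when required
        notActions: []
      }
    ]
    assignableScopes: [
      resourceGroup().id // limited to one resource group instead of a Subscription or Management Group
    ]
  }
}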

Resources

Documentation

Standards

azureresourcemanager:S6387

Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope".

The widest scopes a role can be assigned to are:

  • Subscription: a role assigned with this scope grants access to all resources of this Subscription.
  • Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.

In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of the resources in the scope to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split the scope into multiple role assignments with a narrower scope.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the scope of the role assignment to a Resource or Resource Group.
  • Apply the least privilege principle by assigning roles granting as few permissions as possible, as illustrated in the sketch below.
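
As an illustration, the following minimal Bicep sketch combines both practices: the assignment is scoped to a resource group and uses the built-in Reader role (acdd72a7-3385-48ef-bd42-f606fba81ae7). The principalId parameter is a placeholder for the object ID of the user, group, or service principal being granted access.

targetScope = 'resourceGroup'

// Placeholder: object ID of the user, group, or service principal to authorize
param principalId string

resource readerAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, principalId, 'readerAssignment')
  properties: {
    // Built-in Reader role grants read-only access and nothing more
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
    principalId: principalId
  }
}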

Sensitive Code Example

targetScope = 'subscription' // Sensitive

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(subscription().id, 'exampleRoleAssignment')
}
{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "name": "[guid(subscription().id, 'exampleRoleAssignment')]"
    }
  ]
}

Compliant Solution

targetScope = 'resourceGroup'

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, 'exampleRoleAssignment')
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "name": "[guid(resourceGroup().id, 'exampleRoleAssignment')]"
    }
  ]
}

See

azureresourcemanager:S6413

Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require traceability for a longer duration.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the log retention period to an amount of time sufficient to investigate and restore service in case of an incident.

Sensitive Code Example

For Azure Firewall Policy:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Network/firewallPolicies",
      "apiVersion": "2022-07-01",
      "properties": {
        "insights": {
          "isEnabled": true,
          "retentionDays": 7
        }
      }
    }
  ]
}
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = {
  properties: {
    insights: {
      isEnabled: true
      retentionDays: 7  // Sensitive
    }
  }
}

For Microsoft Network Network Watchers Flow Logs:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkWatchers/example",
      "type": "Microsoft.Network/networkWatchers/flowLogs",
      "apiVersion": "2022-07-01",
      "properties": {
        "retentionPolicy": {
          "days": 7,
          "enabled": true
        }
      }
    }
  ]
}
resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = {
  properties: {
    retentionPolicy: {
      days: 7  // Sensitive
      enabled: true
    }
  }
}

For Microsoft SQL Servers Auditing Settings:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example/default",
      "type": "Microsoft.Sql/servers/auditingSettings",
      "apiVersion": "2021-11-01",
      "properties": {
        "retentionDays": 7,
        "state": "Enabled"
      }
    }
  ]
}
resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = {
  properties: {
    retentionDays: 7    // Sensitive
  }
}

This rule also applies to log retention periods that are too short on other resource types.

Compliant Solution

For Azure Firewall Policy:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Network/firewallPolicies",
      "apiVersion": "2022-07-01",
      "properties": {
        "insights": {
          "isEnabled": true,
          "retentionDays": 30
        }
      }
    }
  ]
}
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = {
  properties: {
    insights: {
      isEnabled: true
      retentionDays: 30
    }
  }
}

For Microsoft Network Network Watchers Flow Logs:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkWatchers/example",
      "type": "Microsoft.Network/networkWatchers/flowLogs",
      "apiVersion": "2022-07-01",
      "properties": {
        "retentionPolicy": {
          "days": 30,
          "enabled": true
        }
      }
    }
  ]
}
resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = {
  properties: {
    retentionPolicy: {
      days: 30
      enabled: true
    }
  }
}

For Microsoft SQL Servers Auditing Settings:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example/default",
      "type": "Microsoft.Sql/servers/auditingSettings",
      "apiVersion": "2021-11-01",
      "properties": {
        "retentionDays": 30,
        "state": "Enabled"
      }
    }
  ]
}
resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = {
  properties: {
    retentionDays: 30
  }
}

The code above also applies to the other resource types covered by this rule.

azureresourcemanager:S6382

Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data.

Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication.

Choosing certificate-based authentication helps establish client/host trust by allowing the host to verify the client and vice versa. A certificate cannot be forged or forwarded by a man-in-the-middle eavesdropper, and its private key is never sent over the network, so it is harder to steal than a password.

In case of a security incident, certificates help bring investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked which causes all the certificates it issued to become untrusted.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be authenticated with certificates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable certificate-based authentication.

Sensitive Code Example

Where the use of client certificates is controlled by a boolean value, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SignalRService/webPubSub",
      "apiVersion": "2020-07-01-preview",
      "name": "example",
      "properties": {
        "tls": {
          "clientCertEnabled": false
        }
      }
    }
  ]
}
resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = {
  name: 'example'
  properties: {
    tls: {
      clientCertEnabled: false // Sensitive
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": false
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: false // Sensitive
  }
}

Where the use of client certificates can be made optional, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Optional"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Optional' // Sensitive
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.App/containerApps",
      "apiVersion": "2022-10-01",
      "name": "example",
      "properties": {
        "configuration": {
          "ingress": {
            "clientCertificateMode": "accept"
          }
        }
      }
    }
  ]
}
resource example 'Microsoft.App/containerApps@2022-10-01' = {
  name: 'example'
  properties: {
    configuration: {
      ingress: {
        clientCertificateMode: 'accept' // Sensitive
      }
    }
  }
}

Where client certificates can be used to authenticate outbound requests, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/linkedservices",
      "apiVersion": "2018-06-01",
      "name": "factories/example",
      "properties": {
        "type": "Web",
        "typeProperties": {
          "authenticationType": "Basic"
        }
      }
    }
  ]
}
resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'example'
  properties: {
    type: 'Web'
    typeProperties: {
      authenticationType: 'Basic' // Sensitive
    }
  }
}

Where a list of permitted client certificates must be provided, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters",
      "apiVersion": "2021-10-15",
      "name": "example",
      "properties": {
        "clientCertificates": []
      }
    }
  ]
}
resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = {
  name: 'example'
  properties: {
    clientCertificates: [] // Sensitive
  }
}

Where a resource can use both certificate-based and password-based authentication, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerRegistry/registries/tokens",
      "apiVersion": "2022-12-01",
      "name": "registries/example",
      "properties": {
        "credentials": {
          "passwords": [
            {
              "name": "password1"
            }
          ]
        }
      }
    }
  ]
}
resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = {
  name: 'example'
  properties: {
    credentials: {
      passwords: [ // Sensitive
        {
          name: 'password1'
        }
      ]
    }
  }
}

Compliant Solution

Where the use of client certificates is controlled by a boolean value:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SignalRService/webPubSub",
      "apiVersion": "2020-07-01-preview",
      "name": "example",
      "properties": {
        "tls": {
          "clientCertEnabled": true
        }
      }
    }
  ]
}
resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = {
  name: 'example'
  properties: {
    tls: {
      clientCertEnabled: true
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Required"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Required'
  }
}

Where the use of client certificates can be made optional:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Required"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Required'
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.App/containerApps",
      "apiVersion": "2022-10-01",
      "name": "example",
      "properties": {
        "configuration": {
          "ingress": {
            "clientCertificateMode": "require"
          }
        }
      }
    }
  ]
}
resource example 'Microsoft.App/containerApps@2022-10-01' = {
  name: 'example'
  properties: {
    configuration: {
      ingress: {
        clientCertificateMode: 'require'
      }
    }
  }
}

Where client certificates can be used to authenticate outbound requests:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/linkedservices",
      "apiVersion": "2018-06-01",
      "name": "example",
      "properties": {
        "type": "Web",
        "typeProperties": {
          "authenticationType": "ClientCertificate"
        }
      }
    }
  ]
}
resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'example'
  properties: {
    type: 'Web'
    typeProperties: {
      authenticationType: 'ClientCertificate'
    }
  }
}

Where a list of permitted client certificates must be provided:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters",
      "apiVersion": "2021-10-15",
      "name": "example",
      "properties": {
        "clientCertificates": [
          {
            "pem": "[base64-encoded certificate]"
          }
        ]
      }
    }
  ]
}
resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = {
  name: 'example'
  properties: {
    clientCertificates: [
      {
        pem: '[base64-encoded certificate]'
      }
    ]
  }
}

Where a resource can use both certificate-based and password-based authentication:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerRegistry/registries/tokens",
      "apiVersion": "2022-12-01",
      "name": "example",
      "properties": {
        "credentials": {
          "certificates": [
            {
              "name": "certificate1",
              "encodedPemCertificate": "[base64-encoded certificate]"
            }
          ]
        }
      }
    }
  ]
}
resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = {
  name: 'example'
  properties: {
    credentials: {
      certificates: [
        {
          name: 'certificate1'
          encodedPemCertificate: '[base64-encoded certificate]'
        }
      ]
    }
  }
}

See

azureresourcemanager:S6383

Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.

To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization's access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.

Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down accesses.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Enable Azure RBAC when the Azure resource supports it.
  • For Kubernetes clusters, enable Azure RBAC if Azure AD integration is supported. Otherwise, use the built-in Kubernetes RBAC.

Sensitive Code Example

For AKS (Azure Kubernetes Service):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-01",
      "properties": {
        "aadProfile": {
          "enableAzureRBAC": false
        },
        "enableRBAC": false
      }
    }
  ]
}
resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = {
  properties: {
    aadProfile: {
      enableAzureRBAC: false    // Sensitive
    }
    enableRBAC: false           // Sensitive
  }
}

For Key Vault:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "2022-07-01",
      "properties": {
        "enableRbacAuthorization": false
      }
    }
  ]
}
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  properties: {
    enableRbacAuthorization: false    // Sensitive
  }
}

Compliant Solution

For AKS (Azure Kubernetes Service):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-01",
      "properties": {
        "aadProfile": {
          "enableAzureRBAC": true
        },
        "enableRBAC": true
      }
    }
  ]
}
resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = {
  properties: {
    aadProfile: {
      enableAzureRBAC: true     // Compliant
    }
    enableRBAC: true            // Compliant
  }
}

For Key Vault:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "2022-07-01",
      "properties": {
        "enableRbacAuthorization": true
      }
    }
  ]
}
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  properties: {
    enableRbacAuthorization: true    // Compliant
  }
}

See

terraform:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For AWS Kinesis Data Streams server-side encryption:

resource "aws_kinesis_stream" "sensitive_stream" {
    encryption_type = "NONE" # Sensitive
}

For Amazon ElastiCache:

resource "aws_elasticache_replication_group" "example" {
    replication_group_id = "example"
    replication_group_description = "example"
    transit_encryption_enabled = false  # Sensitive
}

For Amazon ECS:

resource "aws_ecs_task_definition" "ecs_task" {
  family = "service"
  container_definitions = file("task-definition.json")

  volume {
    name = "storage"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.fs.id
      transit_encryption = "DISABLED"  # Sensitive
    }
  }
}

For Amazon OpenSearch domains:

resource "aws_elasticsearch_domain" "example" {
  domain_name = "example"
  domain_endpoint_options {
    enforce_https = false # Sensitive
  }
  node_to_node_encryption {
    enabled = false # Sensitive
  }
}

For Amazon MSK communications between clients and brokers:

resource "aws_msk_cluster" "sensitive_data_cluster" {
    encryption_info {
        encryption_in_transit {
            client_broker = "TLS_PLAINTEXT" # Sensitive
            in_cluster = false # Sensitive
        }
    }
}

For AWS Load Balancer Listeners:

resource "aws_lb_listener" "front_load_balancer" {
  protocol = "HTTP" # Sensitive

  default_action {
    type = "redirect"

    redirect {
      protocol = "HTTP"
    }
  }
}

The HTTP protocol is used for GCP Region Backend Services:

resource "google_compute_region_backend_service" "example" {
  name                            = "example-service"
  region                          = "us-central1"
  health_checks                   = [google_compute_region_health_check.region.id]
  connection_draining_timeout_sec = 10
  session_affinity                = "CLIENT_IP"
  load_balancing_scheme           = "EXTERNAL"
  protocol                        = "HTTP" # Sensitive
}

Compliant Solution

For AWS Kinesis Data Streams server-side encryption:

resource "aws_kinesis_stream" "compliant_stream" {
    encryption_type = "KMS"
}

For Amazon ElastiCache:

resource "aws_elasticache_replication_group" "example" {
    replication_group_id = "example"
    replication_group_description = "example"
    transit_encryption_enabled = true
}

For Amazon ECS:

resource "aws_ecs_task_definition" "ecs_task" {
  family = "service"
  container_definitions = file("task-definition.json")

  volume {
    name = "storage"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.fs.id
      transit_encryption = "ENABLED"
    }
  }
}

For Amazon OpenSearch domains:

resource "aws_elasticsearch_domain" "example" {
  domain_name = "example"
  domain_endpoint_options {
    enforce_https = true
  }
  node_to_node_encryption {
    enabled = true
  }
}

For Amazon MSK communications between clients and brokers, data in transit is encrypted by default, allowing you to omit writing the encryption_in_transit configuration. However, if you need to configure it explicitly, this configuration is compliant:

resource "aws_msk_cluster" "sensitive_data_cluster" {
    encryption_info {
        encryption_in_transit {
            client_broker = "TLS"
            in_cluster = true
        }
    }
}

For AWS Load Balancer Listeners:

resource "aws_lb_listener" "front_load_balancer" {
  protocol = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      protocol = "HTTPS"
    }
  }
}

The HTTPS protocol is used for GCP Region Backend Services:

resource "google_compute_region_backend_service" "example" {
  name                            = "example-service"
  region                          = "us-central1"
  health_checks                   = [google_compute_region_health_check.region.id]
  connection_draining_timeout_sec = 10
  session_affinity                = "CLIENT_IP"
  load_balancing_scheme           = "EXTERNAL"
  protocol                        = "HTTPS"
}

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

terraform:S6302

A policy that grants all permissions may indicate improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur, resulting in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant only the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, one strategy is to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy for AWS that grants all permissions by using the wildcard (*) in the Action property:

resource "aws_iam_policy" "example" {
  name = "noncompliantpolicy"

  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [
      {
        Action   = [
          "*" # Sensitive
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.mybucket.arn
        ]
      }
    ]
  })
}

An IAM binding for GCP that grants all permissions by using the owner role in the role property:

resource "google_project_iam_binding" "example" {
  project = "example"
  role    = "roles/owner" # Sensitive

  members = [
    "user:jane@example.com",
  ]
}

Compliant Solution

A customer-managed policy for AWS that grants only the required permissions:

resource "aws_iam_policy" "example" {
  name = "compliantpolicy"

  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [
      {
        Action   = [
          "s3:GetObject"
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.mybucket.arn
        ]
      }
    ]
  })
}

An IAM binding for GCP that grants restricted permissions by using the Actions Viewer role in the role property:

resource "google_project_iam_binding" "example" {
  project = "example"
  role    = "roles/actions.Viewer"

  members = [
    "user:jane@example.com",
  ]
}

See

terraform:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws_db_instance and aws_rds_cluster:

resource "aws_db_instance" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

Compliant Solution

For aws_db_instance and aws_rds_cluster:

resource "aws_db_instance" "example" {
  storage_encrypted = true
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = true
}

See

terraform:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information may occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes secure access control less error-prone.
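
As an illustration of the tagging approach, the following sketch (with a hypothetical "sensitivity" tag and hypothetical names) uses the aws:ResourceTag condition key to scope a statement to resources labeled with a given sensitivity level; it is a sketch under these assumptions, not a definitive implementation:

resource "aws_iam_policy" "tagscopedpolicy" {
  name = "tagscopedpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ec2:StartInstances"
        ]
        Effect = "Allow"
        Resource = [
          "arn:aws:ec2:*:*:instance/*"
        ]
        # Only instances tagged with the hypothetical sensitivity=low tag are in scope
        Condition = {
          StringEquals = {
            "aws:ResourceTag/sensitivity" = "low"
          }
        }
      }
    ]
  })
}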

Sensitive Code Example

The update permission is granted on all policies by using the wildcard (*) in the Resource property:

resource "aws_iam_policy" "noncompliantpolicy" {
  name        = "noncompliantpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect   = "Allow"
        Resource = [
          "*" # Sensitive
        ]
      }
    ]
  })
}

Compliant Solution

Restrict update permission to the appropriate subset of policies:

resource "aws_iam_policy" "compliantpolicy" {
  name        = "compliantpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect   = "Allow"
        Resource = [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/team1/*"
        ]
      }
    ]
  })
}

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

terraform:S6388

Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to access unencrypted information.

Ask Yourself Whether

  • The service contains sensitive information that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt cloud storage that contains sensitive information.

Sensitive Code Example

For azurerm_data_lake_store:

resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Disabled"  # Sensitive
}

Compliant Solution

For azurerm_data_lake_store:

resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Enabled"
  encryption_type  = "ServiceManaged"
}

See

terraform:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PublicRead, PublicReadWrite grant respectively "read" and "read and write" privileges to everyone in the world (AllUsers group).
  • AuthenticatedRead grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant users only the permissions necessary for their required tasks. In the context of canned ACLs, set the ACL to private (the default) and, if more granularity is needed, use an appropriate S3 policy, as in the sketch below.
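
A minimal sketch of such a policy, assuming the mycompliantbucket resource from the compliant solution below and a hypothetical reporting role ARN, grants read access to a single principal only:

resource "aws_s3_bucket_policy" "finegrained" {
  bucket = aws_s3_bucket.mycompliantbucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      # Hypothetical principal: only this role may read objects
      Principal = {
        AWS = "arn:aws:iam::123456789012:role/reporting"
      }
      Action = [
        "s3:GetObject"
      ]
      Resource = "${aws_s3_bucket.mycompliantbucket.arn}/*"
    }]
  })
}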

Sensitive Code Example

All users (i.e., anyone in the world, authenticated or not) have read and write permissions with the public-read-write access control:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
  acl    = "public-read-write"
}

Compliant Solution

With the private access control (the default), only the bucket owner has read/write permissions on the bucket and its ACL.

resource "aws_s3_bucket" "mycompliantbucket" { # Compliant
  bucket = "mycompliantbucketname"
  acl    = "private"
}

See

terraform:S6308

Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances.

To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt Elasticsearch domains that contain sensitive information.

Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_elasticsearch_domain:

resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = false  # Sensitive, disabled by default
  }
}

Compliant Solution

For aws_elasticsearch_domain:

resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = true
  }
}

See

terraform:S6380

Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.

Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.

Using authentication coupled with fine-grained authorizations helps bring defense in depth and gives traceability to investigators of security incidents.

Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • This Azure resource stores or processes sensitive data.
  • Compliance policies require access to this resource to be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access.

If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Services and equivalent:

resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = false # Sensitive
  }

  auth_settings {
    enabled = true
    unauthenticated_client_action = "AllowAnonymous" # Sensitive
  }
}

For API Management:

resource "azurerm_api_management_api" "example" { # Sensitive, the openid_authentication block is missing
  name = "example-api"
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = false # Sensitive
  }
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Anonymous" # Sensitive
}

For Storage Accounts:

resource "azurerm_storage_account" "example" {
  allow_blob_public_access = true # Sensitive
}

resource "azurerm_storage_container" "example" {
  container_access_type = "blob" # Sensitive
}

For Redis Caches:

resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = false # Sensitive
  }
}

Compliant Solution

For App Services and equivalent:

resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = true
    unauthenticated_client_action = "RedirectToLoginPage"
  }
}

For API Management:

resource "azurerm_api_management_api" "example" {
  name = "example-api"

  openid_authentication {
    openid_provider_name = azurerm_api_management_openid_connect_provider.example.name
  }
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = true
  }
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Basic"
  username            = local.creds.username
  password            = local.creds.password
}

resource "azurerm_data_factory_linked_service_odata" "example" {
  basic_authentication {
    username = local.creds.username
    password = local.creds.password
  }
}

For Storage Accounts:

resource "azurerm_storage_account" "example" {
  allow_blob_public_access = false
}

resource "azurerm_storage_container" "example" {
  container_access_type = "private"
}

For Redis Caches:

resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = true
  }
}

See

terraform:S6381

Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
  • Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
  • User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Owner roles to fewer than three people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible (see the sketch after this list).
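
A minimal sketch of such a custom role, assuming a hypothetical resource group and a single read action, could look like this:

resource "azurerm_role_definition" "custom_reader" {
  name  = "custom-container-reader"
  scope = azurerm_resource_group.example.id

  # Only one narrowly scoped read action is granted
  permissions {
    actions     = ["Microsoft.Storage/storageAccounts/blobServices/containers/read"]
    not_actions = []
  }

  # The role can only be assigned within this resource group
  assignable_scopes = [
    azurerm_resource_group.example.id
  ]
}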

Sensitive Code Example

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Owner" # Sensitive
  principal_id         = data.azuread_user.example.id
}

Compliant Solution

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Azure Maps Data Reader"
  principal_id         = data.azuread_user.example.id
}

See

terraform:S6382

Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data.

Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication.

Choosing certificate-based authentication helps bring client/host trust by allowing the host to verify the client and vice versa. It cannot be forged or forwarded by a man-in-the-middle eavesdropper, and the certificate’s private key is never sent over the network, so it is harder to steal than a password.

In case of a security incident, certificates help bring investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked which causes all the certificates it issued to become untrusted.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be authenticated with certificates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable certificate-based authentication.

Sensitive Code Example

For App Service:

resource "azurerm_app_service" "example" {
  client_cert_enabled = false # Sensitive
}

For Logic App Standards and Function Apps:

resource "azurerm_function_app" "example" {
  client_cert_mode = "Optional" # Sensitive
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "Basic" # Sensitive
}

For API Management:

resource "azurerm_api_management" "example" {
  sku_name = "Consumption_1"
  client_certificate_mode = "Optional" # Sensitive
}

For Linux and Windows Web Apps:

resource "azurerm_linux_web_app" "example" {
  client_cert_enabled = false # Sensitive
}
resource "azurerm_linux_web_app" "exemple2" {
  client_cert_enabled = true
  client_cert_mode = "Optional" # Sensitive
}

Compliant Solution

For App Service:

resource "azurerm_app_service" "example" {
  client_cert_enabled = true
}

For Logic App Standards and Function Apps:

resource "azurerm_function_app" "example" {
  client_cert_mode = "Required"
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "ClientCertificate"
}

For API Management:

resource "azurerm_api_management" "example" {
  sku_name = "Consumption_1"
  client_certificate_mode = "Required"
}

For Linux and Windows Web Apps:

resource "azurerm_linux_web_app" "exemple" {
  client_cert_enabled = true
  client_cert_mode = "Required"
}

See

terraform:S6383

Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.

To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.

Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down access.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Enable Azure RBAC when the Azure resource supports it.
  • For Kubernetes clusters, enable Azure RBAC if Azure AD integration is supported. Otherwise, use the built-in Kubernetes RBAC.

Sensitive Code Example

For Azure Kubernetes Services:

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = false # Sensitive
  }
}

resource "azurerm_kubernetes_cluster" "example2" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      azure_rbac_enabled = false # Sensitive
    }
  }
}

For Key Vaults:

resource "azurerm_key_vault" "example" {
  enable_rbac_authorization = false # Sensitive
}

Compliant Solution

For Azure Kubernetes Services:

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = true
  }
}

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      azure_rbac_enabled = true
    }
  }
}

For Key Vaults:

resource "azurerm_key_vault" "example" {
  enable_rbac_authorization   = true
}

See

terraform:S6385

Defining a custom role at the Subscription or Management group scope that allows all actions will give it the same capabilities as the built-in Owner role.

Why is this an issue?

In Azure, the Owner role of a Subscription or a Management group provides entities it is assigned to with the maximum level of privileges. The Owner role allows managing all resources and assigning any role to other entities.

Because it is a powerful entitlement, it should be granted to as few users as possible.

When a custom role has the same level of permissions as the Owner one, there are greater chances that high privileges are granted to too many users.

What is the potential impact?

Custom roles that provide the same level of permissions as Owner might indicate a configuration issue. Any entity assigned with it can perform any action on the Subscription or Management group, including changing roles and permissions.

If the affected role is unexpectedly assigned to users, they can compromise the affected scope. They can do so in the long term by assigning dangerous roles to other users or entities.

Depending on the scope to which the role is assignable, the exact impact of a successful exploitation may vary. It generally ranges from data compromise to the takeover of the cloud infrastructure.

Infrastructure takeover

By obtaining the right role, an attacker can gain control over part or all of the Azure infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining.

This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions.

Furthermore, corporate Azure infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers with the correct role could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Financial loss

Financial losses can occur when a malicious user is able to use a paid third-party service. Each user assigned an overly permissive role will be able to use the third-party service without limit and for their own needs, including in ways that were not expected.

This additional use will lead to added costs with the Azure service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected environment. This might result in a partial denial of service for all legitimate users.

How to fix it

To reduce the risk of intrusion through a compromised owner account, it is recommended to limit the number of identities with owner-level permissions.

Code examples

Noncompliant code example

resource "azurerm_role_definition" "example" { # Sensitive
  name        = "example"
  scope       = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}

Compliant solution

resource "azurerm_role_definition" "example" {
  name        = "example"
  scope       = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["Microsoft.Compute/*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}

Going the extra mile

Here is a list of recommendations that can be followed regarding good usage of roles:

  • Apply the least privilege principle by creating a custom role with as few permissions as possible.
  • As custom roles can be updated, gradually add atomic permissions when required.
  • Limit the assignable scopes of the custom role to a set of Resources or Resource Groups (see the sketch after this list).
  • When necessary, use the built-in Owner role instead of a custom role granting subscription owner capabilities.
  • Limit the assignments of Owner roles to less than three people or service principals.
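
As a sketch of limiting assignable scopes, the compliant example above can be narrowed further by making the custom role assignable only to a hypothetical resource group instead of the whole subscription:

resource "azurerm_role_definition" "example" {
  name  = "example"
  scope = azurerm_resource_group.example.id

  permissions {
    actions     = ["Microsoft.Compute/*"]
    not_actions = []
  }

  # Assignable only within the resource group, not the subscription
  assignable_scopes = [
    azurerm_resource_group.example.id
  ]
}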

Resources

Documentation

Standards

terraform:S6387

Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope".

The widest scopes a role can be assigned to are:

  • Subscription: a role assigned with this scope grants access to all resources of this Subscription.
  • Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.

In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of the resources in the scope to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split the scope into multiple role assignments with a narrower scope.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the scope of the role assignment to a Resource or Resource Group.
  • Apply the least privilege principle by assigning roles granting as few permissions as possible.

Sensitive Code Example

resource "azurerm_role_assignment" "example" {
  scope                = data.azurerm_subscription.primary.id # Sensitive
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}

Compliant Solution

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}

See

terraform:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold: data breaches and exposure of encrypted data can undermine trust in the organization, since customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in AWS API Gateway

Code examples

These code samples illustrate how to fix this issue in both APIGateway and ApiGatewayV2.

Noncompliant code example

resource "aws_api_gateway_domain_name" "example" {
  domain_name = "api.example.com"
  security_policy = "TLS_1_0" # Noncompliant
}

The ApiGatewayV2 uses a weak TLS version by default:

resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
  domain_name_configuration {} # Noncompliant
}

Compliant solution

resource "aws_api_gateway_domain_name" "example" {
  domain_name = "api.example.com"
  security_policy = "TLS_1_2"
}
resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
  domain_name_configuration {
    security_policy = "TLS_1_2"
  }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

terraform:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e., to grant users only the permissions necessary for their required tasks. In the context of resource-based policies, list the principals that need the access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

resource "aws_s3_bucket_policy" "mynoncompliantpolicy" {  # Sensitive
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id = "mynoncompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
            Effect = "Allow"
            Principal = {
                AWS = "*"
            }
            Action = [
                "s3:PutObject"
            ]
            Resource = "${aws_s3_bucket.mybucket.arn}/*"
        }
    ]
  })
}

Compliant Solution

This policy allows only the authorized users:

resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id = "mycompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
            Effect = "Allow"
            Principal = {
                AWS = [
                    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
                ]
            }
            Action = [
                "s3:PutObject"
            ]
            Resource = "${aws_s3_bucket.mybucket.arn}/*"
        }
    ]
  })
}

See

terraform:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_ebs_volume:

resource "aws_ebs_volume" "ebs_volume" {  # Sensitive as encryption is disabled by default
}
resource "aws_ebs_volume" "ebs_volume" {
  encrypted = false  # Sensitive
}

For aws_ebs_encryption_by_default:

resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = false  # Sensitive
}

For aws_launch_configuration:

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {  # Sensitive as encryption is disabled by default
  }
  ebs_block_device {  # Sensitive as encryption is disabled by default
  }
}
resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = false  # Sensitive
  }
  ebs_block_device {
    encrypted = false  # Sensitive
  }
}

Compliant Solution

For aws_ebs_volume:

resource "aws_ebs_volume" "ebs_volume" {
  encrypted = true
}

For aws_ebs_encryption_by_default:

resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = true  # Optional, default is "true"
}

For aws_launch_configuration:

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = true
  }
  ebs_block_device {
    encrypted = true
  }
}

See

terraform:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (by using wildcard "*" permissions, for example). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit both the permissions that are granted and the resources to which they are granted. Doing so ensures that no users or roles have more permissions than they need.

If this is not done, such policies carry security risks in case an attacker gains access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS Identity and Access Management

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

resource "aws_iam_policy" "example" {
  name = "example"
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

resource "aws_iam_policy" "example" {
  name = "example"
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode"
            ],
            "Resource": "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
        }
    ]
}
EOF
}

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.
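
On AWS, one way to support such monitoring is IAM Access Analyzer. A minimal sketch of enabling an analyzer with Terraform (hypothetical name; its findings can then help identify unused access):

resource "aws_accessanalyzer_analyzer" "example" {
  # Hypothetical analyzer name; findings surface in the IAM console
  analyzer_name = "example-analyzer"
}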

Resources

Documentation

Articles & blog posts

Standards

terraform:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. When the data is encrypted, adversaries who gain physical access to the storage media cannot read it.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sagemaker_notebook_instance:

resource "aws_sagemaker_notebook_instance" "notebook" {  # Sensitive, encryption disabled by default
}

Compliant Solution

For aws_sagemaker_notebook_instance:

resource "aws_sagemaker_notebook_instance" "notebook" {
  kms_key_id = aws_kms_key.enc_key.key_id
}
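
The compliant example references a KMS key named enc_key that is not shown above; a minimal sketch of such a key (hypothetical description, with rotation enabled as a good practice) could be:

resource "aws_kms_key" "enc_key" {
  description         = "Key used to encrypt SageMaker notebook instance data"
  enable_key_rotation = true
}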

See

terraform:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sns_topic:

resource "aws_sns_topic" "topic" {  # Sensitive, encryption disabled by default
  name = "sns-unencrypted"
}

Compliant Solution

For aws_sns_topic:

resource "aws_sns_topic" "topic" {
  name = "sns-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}

See

terraform:S6403

By default, GCP SQL instances offer encryption in transit, with support for TLS, but insecure connections are still accepted. On an unsecured network, such as a public network, the risk of traffic being intercepted is high. When the data isn’t encrypted, an attacker can intercept it and read confidential information.

When creating a GCP SQL instance, a public IP address is automatically assigned to it and connections to the SQL instance from public networks can be authorized.

TLS is automatically used when connecting to SQL instances through mechanisms such as the Cloud SQL Auth proxy.

Ask Yourself Whether

Connections are not already automatically encrypted by GCP (e.g., by the Cloud SQL Auth proxy), and

  • Connections to the SQL instance are performed on untrusted networks.
  • The data stored in the SQL instance is confidential.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt all connections to the SQL instance, whether using public or private IP addresses. However, since private networks can be considered trusted, requiring TLS in this situation is usually a lower priority task.

Sensitive Code Example

resource "google_sql_database_instance" "example" { # Sensitive: tls is not required
  name             = "noncompliant-master-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }
}

Compliant Solution

resource "google_sql_database_instance" "example" {
  name             = "compliant-master-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl = true
      ipv4_enabled = true
    }
  }
}

See

terraform:S6404

Granting public access to GCP resources may reduce an organization’s ability to protect itself against attacks or theft of its GCP resources.
Security incidents associated with misuse of public access include disruption of critical functions, data theft, and additional costs due to resource overload.

To be as prepared as possible in the event of a security incident, authentication combined with fine-grained permissions helps maintain the principle of defense in depth and trace incidents back to the perpetrators.

GCP also provides the ability to grant access to a large group of people:

  • If public access is granted to all Google users, the impact of a data theft is the same as if public access is granted to all Internet users.
  • If access is granted to a large Google group, the impact of a data theft is limited based on the size of the group.

The only thing that changes in these cases is the ability to track user access in the event of an incident.

Ask Yourself Whether

  • This GCP resource is essential to the information system infrastructure.
  • This GCP resource is essential to mission-critical functions.
  • This GCP resource stores or processes sensitive data.
  • Compliance policies require that access to this resource be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Explicitly set access to this resource or function as private.

Sensitive Code Example

For IAM resources:

resource "google_cloudfunctions_function_iam_binding" "example" {
  members = [
    "allUsers",              # Sensitive
    "allAuthenticatedUsers", # Sensitive
  ]
}

resource "google_cloudfunctions_function_iam_member" "example" {
  member = "allAuthenticatedUsers" # Sensitive
}

For ACL resources:

resource "google_storage_bucket_access_control" "example" {
  entity = "allUsers" # Sensitive
}

resource "google_storage_bucket_acl" "example" {
  role_entity = [
    "READER:allUsers",              # Sensitive
    "READER:allAuthenticatedUsers", # Sensitive
  ]
}

For container clusters:

resource "google_container_cluster" "example" {
  private_cluster_config {
    enable_private_nodes    = false # Sensitive
    enable_private_endpoint = false # Sensitive
  }
}

Compliant Solution

For IAM resources:

resource "google_cloudfunctions_function_iam_binding" "example" {
  members = [
    "serviceAccount:${google_service_account.example.email}",
    "group:${var.example_group}"
  ]
}

resource "google_cloudfunctions_function_iam_member" "example" {
  member = "user:${var.example_user}" # Sensitive
}

For ACL resources:

resource "google_storage_bucket_access_control" "example" {
  entity = "user-${var.example_user]"
}

resource "google_storage_bucket_acl" "example" {
  role_entity = [
    "READER:user-name@example.com",
    "READER:group-admins@example.com"
  ]
}

For container clusters:

resource "google_container_cluster" "example" {
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
  }
}

See

terraform:S6245

This rule is deprecated, and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, like HIPAA or PCI DSS, or with other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

Compliant Solution

Server-side encryption with Amazon S3-managed keys is used for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

Server-side encryption with Amazon S3-managed keys is used for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
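
If control over the encryption keys is required, SSE-KMS can be used instead of SSE-S3. A minimal sketch for AWS provider version 4 or above, assuming a customer-managed key resource named aws_kms_key.example:

resource "aws_kms_key" "example" {
  description = "Key used for S3 server-side encryption"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example_kms" {
  bucket = aws_s3_bucket.example.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms" # objects encrypted with the KMS key below
      kms_master_key_id = aws_kms_key.example.arn
    }
  }
}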

See

terraform:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks encryption of transported data, as well as the capability to build an authenticated connection. This means that a malicious actor who is able to intercept traffic from the network can read, modify, or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to deny all HTTP requests:

  • for all objects (*) of the bucket
  • for all principals (*)
  • for all actions (*)

Sensitive Code Example

No secure policy is attached to this bucket:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
}

A policy is defined but enforces HTTPS communication only for some users:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
}

resource "aws_s3_bucket_policy" "mynoncompliantbucketpolicy" {
  bucket = "mynoncompliantbucketname"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "mynoncompliantbucketpolicy"
    Statement = [
      {
        Sid       = "HTTPSOnly"
        Effect    = "Deny"
        Principal = [
          "arn:aws:iam::123456789123:root"
        ] # secondary location: only one principal is forced to use https
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.mynoncompliantbucket.arn,
          "${aws_s3_bucket.mynoncompliantbucket.arn}/*",
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

Compliant Solution

A secure policy that denies all HTTP requests is used:

resource "aws_s3_bucket" "mycompliantbucket" {
  bucket = "mycompliantbucketname"
}

resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = "mycompliantbucketname"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "mycompliantpolicy"
    Statement = [
      {
        Sid       = "HTTPSOnly"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.mycompliantbucket.arn,
          "${aws_s3_bucket.mycompliantbucket.arn}/*",
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

See

terraform:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Allowing public access may happen for various reasons, such as quick maintenance or time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from accepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign one to it, but do not create listeners for the public IP address.

Sensitive Code Example

For AWS:

resource "aws_instance" "example" {
  associate_public_ip_address = true # Sensitive
}
resource "aws_dms_replication_instance" "example" {
  publicly_accessible = true # Sensitive
}

For Azure:

resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = true # Sensitive
}
resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = true # Sensitive
}
resource "azurerm_kubernetes_cluster" "production" {
  api_server_authorized_ip_ranges = ["176.0.0.0/4"] # Sensitive
  default_node_pool {
    enable_node_public_ip = true # Sensitive
  }
}

For GCP:

resource "google_compute_instance" "example" {
  network_interface {
    network = "default"

    access_config {  # Sensitive
      # Ephemeral public IP
    }
  }
}

Compliant Solution

For AWS:

resource "aws_instance" "example" {
  associate_public_ip_address = false
}
resource "aws_dms_replication_instance" "example" {
  publicly_accessible          = false
}

For Azure:

resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = false
}
resource "azurerm_kubernetes_cluster" "production" {
  api_server_authorized_ip_ranges = ["192.168.0.0/16"]
  default_node_pool {
    enable_node_public_ip = false
  }
}

For GCP:

resource "google_compute_instance" "example" {
  network_interface {
    network = google_compute_network.vpc_network_example.name
  }
}

Note that setting network = "default" in the network interface block leads to other security problems, such as the loss of logging and Cloud VPN/VPC network peering capabilities, and the addition of insecure firewall rules.
A safer alternative is to create a dedicated VPC or subnetwork and enforce security measures on it.
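
A minimal sketch of such a dedicated network, matching the vpc_network_example reference used above (the names and CIDR range are illustrative):

resource "google_compute_network" "vpc_network_example" {
  name                    = "example-vpc"
  auto_create_subnetworks = false # avoid auto-created subnetworks with default rules
}

resource "google_compute_subnetwork" "example" {
  name          = "example-subnet"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-central1"
  network       = google_compute_network.vpc_network_example.id
}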

See

terraform:S6400

Granting highly privileged resource rights to users or groups can reduce an organization’s ability to protect against account or service theft. It prevents proper segregation of duties and creates potentially critical attack vectors on affected resources.

If elevated access rights are abused or compromised, both the data that the affected resources work with and their access tracking are at risk.

Ask Yourself Whether

  • This GCP resource is essential to the information system infrastructure.
  • This GCP resource is essential to mission-critical functions.
  • Compliance policies require that administrative privileges for this resource be limited to a small group of individuals.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Grant IAM policies or members a less permissive role: in most cases, read-only privileges are sufficient.

Separate tasks by creating multiple roles that do not use a full access role for day-to-day work.

If the predefined GCP roles do not include the specific permissions you need, create custom IAM roles.
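
As an illustration, a minimal custom role sketch granting only read privileges (the role_id, title, and permission list are illustrative):

resource "google_project_iam_custom_role" "read_only_example" {
  role_id     = "readOnlyExample"
  title       = "Read-only example role"
  permissions = [
    # read-only permissions, no write or destructive access
    "run.services.get",
    "run.services.list",
  ]
}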

Sensitive Code Example

For an IAM policy setup:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/run.admin" # Sensitive
    members = [
      "user:name@example.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "policy" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.admin.policy_data
}

For an IAM policy binding:

resource "google_cloud_run_service_iam_binding" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/run.admin" # Sensitive
  members = [
    "user:name@example.com",
  ]
}

For adding a member to a policy:

resource "google_cloud_run_service_iam_member" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/run.admin" # Sensitive
  member = "user:name@example.com"
}

Compliant Solution

For an IAM policy setup:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/viewer"
    members = [
      "user:name@example.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.admin.policy_data
}

For an IAM policy binding:

resource "google_cloud_run_service_iam_binding" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/viewer"
  members = [
    "user:name@example.com",
  ]
}

For adding a member to a policy:

resource "google_cloud_run_service_iam_member" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/viewer"
  member = "user:name@example.com"
}

See

terraform:S6405

SSH keys stored and managed in a project’s metadata can be used to access GCP VM instances. By default, GCP automatically deploys project-level SSH keys to VM instances.

Project-level SSH keys can lead to unauthorized access because:

  • Their use prevents fine-grained VM-level access control and makes it difficult to follow the principle of least privilege.
  • Unlike managed access control with OS Login, manual cryptographic key management is error-prone and requires careful attention. For example, if a user leaves a project, their SSH keys should be removed from the metadata to prevent unwanted access.
  • If a project-level SSH key is compromised, all VM instances may be compromised.

Ask Yourself Whether

  • VM instances in a project have different security requirements.
  • Many users with different profiles need access to the VM instances in that project.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Block project-level SSH keys by setting the metadata.block-project-ssh-keys argument to true.
  • Use OS Login to benefit from managed access control (a sketch follows below).
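
For reference, OS Login can be enabled project-wide through a metadata item; a minimal sketch (the resource name is illustrative):

resource "google_compute_project_metadata_item" "oslogin" {
  key   = "enable-oslogin" # enables managed access control for all VMs in the project
  value = "TRUE"
}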

Sensitive Code Example

resource "google_compute_instance" "example" { # Sensitive, because metadata.block-project-ssh-keys is not set to true
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  network_interface {
    network = "default"

    access_config {
    }
  }
}

Compliant Solution

resource "google_compute_instance" "example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    block-project-ssh-keys = true
  }

  network_interface {
    network = "default"

    access_config {
    }
  }
}

See

terraform:S6406

Excessive granting of GCP IAM permissions can allow attackers to exploit an organization’s cloud resources with malicious intent.

To prevent improper creation or deletion of resources after an account is compromised, proactive measures include both following GCP Security Insights and ensuring custom roles contain as few privileges as possible.

After gaining a foothold in the target infrastructure, sophisticated attacks typically consist of two major parts.
First, attackers must deploy new resources to carry out their malicious intent. To guard against this, operations teams must control what unexpectedly appears in the infrastructure, such as what is:

  • added
  • written
  • updated
  • started
  • appended
  • applied
  • accessed.

Once the malicious intent is executed, attackers must avoid detection at all costs.
To counter attackers' attempts to remove their fingerprints, operations teams must control what unexpectedly disappears from the infrastructure, such as what is:

  • stopped
  • disabled
  • canceled
  • deleted
  • destroyed
  • detached
  • disconnected
  • suspended
  • rejected
  • removed.

For operations teams to be resilient in this scenario, their organization must apply both:

  • Detection security: log these actions to better detect malicious actions.
  • Preventive security: review and limit granted permissions.

This rule raises an issue when a custom role grants a number of sensitive permissions (read-write or destructive permissions) that exceeds a given parameter.

Ask Yourself Whether

  • This custom role will be mostly used for read-only purposes.
  • Compliance policies require read-only access.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To reduce the risks associated with this role after a compromise:

  • Reduce the list of permissions to grant only those that are actually needed.
  • Favor read-only over read-write.

Sensitive Code Example

This custom role grants more than 5 sensitive permissions:

resource "google_project_iam_custom_role" "example" {
  permissions = [ # Sensitive
    "resourcemanager.projects.create", # Sensitive permission
    "resourcemanager.projects.delete", # Sensitive permission
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create", # Sensitive permission
    "run.services.delete", # Sensitive permission
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.setIamPolicy",  # Sensitive permission
    "run.services.list",
    "run.services.update",  # Sensitive permission
  ]
}

Compliant Solution

This custom role grants fewer than 5 sensitive permissions:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

See

terraform:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:

  • BlockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • IgnorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • BlockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • RestrictPublicBuckets: whether to restrict access to buckets with public policies to principals within the bucket owner’s account.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static website resources (images, CSS, etc.).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already set to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • BlockPublicAcls to true to block new attempts to set public ACLs.
  • IgnorePublicAcls to true to block existing public ACLs.
  • BlockPublicPolicy to true to block new attempts to set public policies.
  • RestrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, the aws_s3_bucket_public_access_block is fully deactivated (nothing is blocked):

resource "aws_s3_bucket" "example" { # Sensitive: no Public Access Block defined for this bucket
  bucket = "example"
}

This aws_s3_bucket_public_access_block allows public ACLs to be set:

resource "aws_s3_bucket" "example" {  # Sensitive
  bucket = "examplename"
}

resource "aws_s3_bucket_public_access_block" "example-public-access-block" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = false # should be true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Compliant Solution

This aws_s3_bucket_public_access_block blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_public_access_block" "example-public-access-block" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

See

terraform:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

An ingress rule allowing all inbound SSH traffic for AWS:

resource "aws_security_group" "noncompliant" {
  name        = "allow_ssh_noncompliant"
  description = "allow_ssh_noncompliant"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "SSH rule"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]  # Noncompliant
  }
}

A security rule allowing all inbound SSH traffic for Azure:

resource "azurerm_network_security_rule" "noncompliant" {
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"  # Noncompliant
  destination_address_prefix  = "*"
}

A firewall rule allowing all inbound SSH traffic for GCP:

resource "google_compute_firewall" "noncompliant" {
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]  # Noncompliant
}

Compliant solution

An ingress rule allowing inbound SSH traffic from specific IP addresses for AWS:

resource "aws_security_group" "compliant" {
  name        = "allow_ssh_compliant"
  description = "allow_ssh_compliant"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "SSH rule"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["1.2.3.0/24"]
  }
}

A security rule allowing inbound SSH traffic from specific IP addresses for Azure:

resource "azurerm_network_security_rule" "compliant" {
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "1.2.3.0"
  destination_address_prefix  = "*"
}

A firewall rule allowing inbound SSH traffic from specific IP addresses for GCP:

resource "google_compute_firewall" "compliant" {
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["10.0.0.1/32"]
}

Resources

Documentation

Standards

terraform:S6364

Shortening the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Amazon Relational Database Service clusters and instances:

resource "aws_db_instance" "example" {
  backup_retention_period = 2 # Sensitive
}

For Azure Cosmos DB accounts:

resource "azurerm_cosmosdb_account" "example" {
  backup {
    type = "Periodic"
    retention_in_hours = 8 # Sensitive
  }
}

Compliant Solution

For Amazon Relational Database Service clusters and instances:

resource "aws_db_instance" "example" {
  backup_retention_period = 5
}

For Azure Cosmos DB accounts:

resource "azurerm_cosmosdb_account" "example" {
  backup {
    type = "Periodic"
    retention_in_hours = 300
  }
}
terraform:S6401

The likelihood of security incidents increases when cryptographic keys are used for a long time. Thus, to strengthen data protection, it’s recommended to rotate the symmetric keys created with the Google Cloud Key Management Service (KMS) automatically and periodically. Note that in GCP KMS it’s not possible to rotate asymmetric keys automatically.

Ask Yourself Whether

  • The cryptographic key is a symmetric key.
  • The application requires compliance with security standards like PCI-DSS.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to rotate keys automatically and regularly. The shorter the rotation period, the less data can be decrypted by an attacker if a key is compromised. The rotation period usually depends on the amount of data encrypted with the key, or on other requirements such as compliance with security standards. In general, a rotation period of 90 days can be used.

Sensitive Code Example

resource "google_kms_crypto_key" "noncompliant-key" { # Sensitive: no rotation period is defined
  name            = "example"
  key_ring        = google_kms_key_ring.keyring.id
}

Compliant Solution

resource "google_kms_crypto_key" "compliant-key" {
  name            = "example"
  key_ring        = google_kms_key_ring.keyring.id
  rotation_period = "7776000s" # 90 days
}

See

terraform:S6402

The Domain Name System (DNS) is vulnerable by default to various types of attacks.

One of the biggest risks is DNS cache poisoning, which occurs when a DNS resolver accepts spoofed DNS data, caches the malicious records, and potentially serves them later in response to legitimate DNS lookups. This attack typically relies on the attacker’s man-in-the-middle (MITM) position on the network and can be used to redirect users from an intended website to a malicious one.

To prevent these vulnerabilities, Domain Name System Security Extensions (DNSSEC) ensure the integrity and authenticity of DNS data by digitally signing DNS zones.

The public key of a DNS zone used to validate signatures can be trusted as DNSSEC is based on the following chain of trust:

  • The parent DNS zone adds a "fingerprint" of the public key of the child zone in a "DS record".
  • The parent DNS zone signs it with its own private key.
  • And this process continues until the root zone.

Ask Yourself Whether

  • The parent DNS zone (likely managed by the DNS registrar of the domain name) supports DNSSEC.
  • The DNS zone is public (contains data such as publicly reachable IP addresses).

There is a risk if you answered yes to both of these questions.

Recommended Secure Coding Practices

It’s recommended to use DNSSEC when creating private and public DNS zones.

Private DNS zones cannot be queried on the Internet and provide DNS name resolution for private networks. The risk of MITM attacks might be considered low on these networks; implementing DNSSEC is therefore still recommended, but with a lower priority.

Note: Choose a robust signing algorithm when setting up DNSSEC, such as rsasha256. The insecure rsasha1 algorithm should no longer be used.

Sensitive Code Example

resource "google_dns_managed_zone" "example" { # Sensitive: dnssec_config is missing
  name     = "foobar"
  dns_name = "foo.bar."
}

Compliant Solution

resource "google_dns_managed_zone" "example" {
  name     = "foobar"
  dns_name = "foo.bar."

  dnssec_config {
    default_key_specs {
      algorithm = "rsasha256"
    }
  }
}

See

terraform:S6407

App Engine supports encryption in transit through TLS. As soon as the app is deployed, it can be requested using appspot.com domains or custom domains. By default, endpoints accept both clear-text and encrypted traffic. When communication isn’t encrypted, there is a risk that an attacker could intercept it and read confidential information.

When creating an App Engine application, request handlers can be given different security levels for encryption:

  • SECURE_NEVER: only HTTP requests are allowed (HTTPS requests are redirected to HTTP).
  • SECURE_OPTIONAL and SECURE_DEFAULT: both HTTP and HTTPS requests are allowed.
  • SECURE_ALWAYS: only HTTPS requests are allowed (HTTP requests are redirected to HTTPS).

Ask Yourself Whether

  • The handler serves confidential data in HTTP responses.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended for App Engine handlers to require TLS for all traffic. It can be achieved by setting the security level to SECURE_ALWAYS.

Sensitive Code Example

SECURE_DEFAULT, SECURE_NEVER and SECURE_OPTIONAL are sensitive TLS security levels:

resource "google_app_engine_standard_app_version" "example" {
  version_id = "v1"
  service    = "default"
  runtime    = "nodejs"

  handlers {
    url_regex                   = ".*"
    redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301"
    security_level              = "SECURE_OPTIONAL" # Sensitive
    script {
      script_path = "auto"
    }
  }
}

Compliant Solution

Force the use of TLS for the handler by setting the security level to SECURE_ALWAYS:

resource "google_app_engine_standard_app_version" "example" {
  version_id = "v1"
  service    = "default"
  runtime    = "nodejs"

  handlers {
    url_regex                   = ".*"
    redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301"
    security_level              = "SECURE_ALWAYS"
    script {
      script_path = "auto"
    }
  }
}

See

terraform:S6408

Creating custom roles that allow privilege escalation can allow attackers to maliciously exploit an organization’s cloud resources.

Certain GCP permissions allow impersonation of one or more privileged principals within a GCP infrastructure.
To prevent privilege escalation after an account has been compromised, proactively follow GCP Security Insights and ensure that custom roles contain as few privileges as possible that allow direct or indirect impersonation.

For example, privileges like deploymentmanager.deployments.create allow impersonation of service accounts, even if the name does not sound like it.
Other privileges like setIamPolicy, which are more explicit, directly allow their holder to extend their privileges.

After gaining a foothold in the target infrastructure, sophisticated attackers typically map their newfound roles to understand what is exploitable.

The riskiest privileges are either:

  • At the infrastructure level: privileges to perform project, folder, or organization-wide administrative tasks.
  • At the resource level: privileges to perform resource-wide administrative tasks.

In either case, the following privileges should be avoided or granted only with caution:

  • *.setIamPolicy
  • cloudbuild.builds.create
  • cloudfunctions.functions.create
  • cloudfunctions.functions.update
  • cloudscheduler.jobs.create
  • composer.environments.create
  • compute.instances.create
  • dataflow.jobs.create
  • dataproc.clusters.create
  • deploymentmanager.deployments.create
  • iam.roles.update
  • iam.serviceAccountKeys.create
  • iam.serviceAccounts.actAs
  • iam.serviceAccounts.getAccessToken
  • iam.serviceAccounts.getOpenIdToken
  • iam.serviceAccounts.implicitDelegation
  • iam.serviceAccounts.signBlob
  • iam.serviceAccounts.signJwt
  • orgpolicy.policy.set
  • run.services.create
  • serviceusage.apiKeys.create
  • serviceusage.apiKeys.list
  • storage.hmacKeys.create

Ask Yourself Whether

  • This role requires impersonation to perform specific tasks with different privileges.
  • This custom role is intended for a small group of administrators.

There is a risk if you answered no to any of these questions.

Recommended Secure Coding Practices

Use a permission that does not allow privilege escalation.

Sensitive Code Example

Lightweight custom role intended for a developer:

resource "google_organization_iam_custom_role" "example" {
  permissions = [
    "iam.serviceAccounts.getAccessToken",     # Sensitive
    "iam.serviceAccounts.getOpenIdToken",     # Sensitive
    "iam.serviceAccounts.actAs",              # Sensitive
    "iam.serviceAccounts.implicitDelegation", # Sensitive
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

Lightweight custom role intended for a read-only user:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "iam.serviceAccountKeys.create",        # Sensitive
    "iam.serviceAccountKeys.get",           # Sensitive
    "deploymentmanager.deployments.create", # Sensitive
    "cloudbuild.builds.create",             # Sensitive
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
  ]
}

Compliant Solution

Lightweight custom role intended for a developer:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

Lightweight custom role intended for a read-only user:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
  ]
}

See

terraform:S6409

Enabling Legacy Authorization, Attribute-Based Access Control (ABAC), on Google Kubernetes Engine resources can reduce an organization’s ability to protect itself against access controls being compromised.

For Kubernetes, Attribute-Based Access Control has been superseded by Role-Based Access Control. ABAC is not under active development anymore and thus should be avoided.

Ask Yourself Whether

  • This resource is essential for the information system infrastructure.
  • This resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Unless you are relying on ABAC, leave it disabled.

Sensitive Code Example

For Google Kubernetes Engine:

resource "google_container_cluster" "example" {
  enable_legacy_abac = true # Sensitive
}

Compliant Solution

For Google Kubernetes Engine:

resource "google_container_cluster" "example" {
  enable_legacy_abac = false
}

See

terraform:S6414

The Google Cloud audit logs service records administrative activities and access to the project’s Google Cloud resources. It is important to enable audit logs in order to investigate malicious activities in the event of a security incident.

Some project members may be exempted from having their activities recorded in the Google Cloud audit log service, creating a blind spot and reducing the capacity to investigate future security events.

Ask Yourself Whether

  • The members exempted from having their activity logged have high privileges.
  • Compliance rules require audit logging to be enabled for all members.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to have a consistent audit logging policy for all project members and therefore not to create logging exemptions for certain members.

Sensitive Code Example

resource "google_project_iam_audit_config" "example" {
  project = data.google_project.project.id
  service = "allServices"
  audit_log_config {
    log_type = "ADMIN_READ"
    exempted_members = [ # Sensitive
      "user:rogue.administrator@gmail.com",
    ]
  }
}

Compliant Solution

resource "google_project_iam_audit_config" "example" {
  project = data.google_project.project.id
  service = "allServices"
  audit_log_config {
    log_type = "ADMIN_READ"
  }
}

See

terraform:S6252

S3 buckets can be in three states related to versioning:

  • unversioned (the default)
  • versioning enabled
  • versioning suspended

When an S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the S3 bucket.

This can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

Versioning is disabled by default:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

Compliant Solution

Versioning is enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example-versioning" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Versioning is enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  versioning {
    enabled = true
  }
}

See

terraform:S6258

Disabling logging of this component can lead to missing traceability in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be monitored.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable the logging capabilities of this component. Depending on the component, new permissions might be required by the logging storage components.
You should consult the official documentation to enable logging for the impacted components. For example, AWS Application Load Balancer Access Logs require an additional bucket policy.
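
For example, for AWS Application Load Balancers, access logging can be enabled as follows; a minimal sketch, assuming a log bucket with the required bucket policy already exists:

resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "application"

  access_logs {
    bucket  = aws_s3_bucket.example-logs.id # bucket policy must allow the ELB service account to write
    prefix  = "lb-logs"
    enabled = true
  }
}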

Sensitive Code Example

For Amazon S3 access requests:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

For Amazon API Gateway stages:

resource "aws_api_gateway_stage" "example" { # Sensitive
  xray_tracing_enabled = false # Sensitive
}
}

For Amazon MSK Broker logs:

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "2.7.1"
  number_of_broker_nodes = 3

  logging_info {
    broker_logs { # Sensitive
      firehose {
        enabled = false
      }
      s3 {
        enabled = false
      }
    }
  }
}

For Amazon MQ Brokers:

resource "aws_mq_broker" "example" {
  logs {  # Sensitive
    audit   = false
    general = false
  }
}

For Amazon DocumentDB:

resource "aws_docdb_cluster" "example" { # Sensitive
  cluster_identifier = "example"
}

For Azure App Services:

resource "azurerm_app_service" "example" {
  logs {
    application_logs {
      file_system_level = "Off" # Sensitive
      azure_blob_storage {
        level = "Off"           # Sensitive
      }
    }
  }
}

For GCP VPC Subnetwork:

resource "google_compute_subnetwork" "example" { # Sensitive
  name          = "example"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.example.id
}

For GCP SQL Database Instance:

resource "google_sql_database_instance" "example" {
  name = "example"

  settings { # Sensitive
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl  = true
      ipv4_enabled = true
    }
  }
}

For GCP Kubernetes Engine (GKE) cluster:

resource "google_container_cluster" "example" {
  name               = "example"
  logging_service    = "none" # Sensitive
}

Compliant Solution

For Amazon S3 access requests:

resource "aws_s3_bucket" "example-logs" {
  bucket = "example_logstorage"
  acl    = "log-delivery-write"
}

resource "aws_s3_bucket" "example" {
  bucket = "example"

  logging { # AWS provider <= 3
      target_bucket = aws_s3_bucket.example-logs.id
      target_prefix = "log/example"
  }
}

resource "aws_s3_bucket_logging" "example" { # AWS provider >= 4
  bucket = aws_s3_bucket.example.id

  target_bucket = aws_s3_bucket.example-logs.id
  target_prefix = "log/example"
}

For Amazon API Gateway stages:

resource "aws_api_gateway_stage" "example" {
  xray_tracing_enabled = true

  access_log_settings {
    destination_arn = "arn:aws:logs:eu-west-1:123456789:example"
    format = "..."
  }
}

For Amazon MSK Broker logs:

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "2.7.1"
  number_of_broker_nodes = 3

  logging_info {
    broker_logs {
      firehose   {
        enabled = false
      }
      s3 {
        enabled = true
        bucket  = "example"
        prefix  = "log/msk-"
      }
    }
  }
}

For Amazon MQ Brokers, enable audit or general:

resource "aws_mq_broker" "example" {
  logs {
    audit   = true
    general = true
  }
}

For Amazon DocumentDB:

resource "aws_docdb_cluster" "example" {
  cluster_identifier              = "example"
  enabled_cloudwatch_logs_exports = ["audit"]
}

For Azure App Services:

resource "azurerm_app_service" "example" {
 logs {
    http_logs {
      file_system {
        retention_in_days = 90
        retention_in_mb   = 100
      }
    }

 application_logs {
      file_system_level = "Error"
      azure_blob_storage {
        retention_in_days = 90
        level             = "Error"
      }
    }
  }
}

For GCP VPC Subnetwork:

resource "google_compute_subnetwork" "example" {
  name          = "example"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.example.id

  log_config {
    aggregation_interval = "INTERVAL_10_MIN"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}

For GCP SQL Database Instance:

resource "google_sql_database_instance" "example" {
  name             = "example"

  settings {
    ip_configuration {
      require_ssl  = true
      ipv4_enabled = true
    }
    database_flags {
      name  = "log_connections"
      value = "on"
    }
    database_flags {
      name  = "log_disconnections"
      value = "on"
    }
    database_flags {
      name  = "log_checkpoints"
      value = "on"
    }
    database_flags {
      name  = "log_lock_waits"
      value = "on"
    }
  }
}

For GCP Kubernetes Engine (GKE) cluster:

resource "google_container_cluster" "example" {
  name               = "example"
  logging_service    = "logging.googleapis.com/kubernetes"
}

See

terraform:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sqs_queue:

resource "aws_sqs_queue" "queue" {  # Sensitive, encryption disabled by default
  name = "sqs-unencrypted"
}

Compliant Solution

For aws_sqs_queue:

resource "aws_sqs_queue" "queue" {
  name = "sqs-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}
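
Alternatively, recent versions of the AWS provider expose SQS-managed encryption (SSE-SQS), which avoids managing a KMS key; a sketch, assuming the sqs_managed_sse_enabled argument is available (AWS provider 4.x or above):

resource "aws_sqs_queue" "queue_sse" {
  name                    = "sqs-encrypted-sse"
  sqs_managed_sse_enabled = true # encryption handled with SQS-owned keys
}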

See

terraform:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdPs), as sketched below.
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.
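
A minimal sketch of the COGNITO_USER_POOLS method mentioned above (the user pool and REST API references are illustrative):

resource "aws_api_gateway_authorizer" "cognito" {
  name          = "cognito-authorizer"
  rest_api_id   = aws_api_gateway_rest_api.example.id
  type          = "COGNITO_USER_POOLS"
  provider_arns = [aws_cognito_user_pool.example.arn]
}

resource "aws_api_gateway_method" "example" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  resource_id   = aws_api_gateway_resource.example.id
  http_method   = "GET"
  authorization = "COGNITO_USER_POOLS" # requests must carry a valid user pool token
  authorizer_id = aws_api_gateway_authorizer.cognito.id
}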

Sensitive Code Example

A public API that doesn’t have access control implemented:

resource "aws_api_gateway_method" "noncompliantapi" {
  authorization = "NONE" # Sensitive
  http_method   = "GET"
}

Compliant Solution

An API that implements AWS IAM permissions:

resource "aws_api_gateway_method" "compliantapi" {
  authorization = "AWS_IAM"
  http_method   = "GET"
}

See

terraform:S6378

Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks.

Authenticating via managed identities to an Azure resource relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets used by Azure are not even accessible to end-users.

In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions.

By transparently taking care of Azure Active Directory authentication, Managed Identities eliminate day-to-day credential management.

Ask Yourself Whether

The resource:

  • Needs to authenticate to Azure resources that support Azure Active Directory (AAD).
  • Uses a different Access Control system that doesn’t guarantee the same security controls as AAD, or no Access Control system at all.

There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:

  • It cannot be shared across resources.
  • Its life cycle is deeply tied to the life cycle of its Azure resource.
  • It provides a unique independent identity.

Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above.

Sensitive Code Example

For Typical identity blocks:

resource "azurerm_api_management" "example" { # Sensitive, the identity block is missing
  name           = "example"
  publisher_name = "company"
}

For connections between Kusto Clusters and Azure Data Factory:

resource "azurerm_data_factory_linked_service_kusto" "example" {
  name                 = "example"
  use_managed_identity = false # Sensitive
}

Compliant Solution

For Typical identity blocks:

resource "azurerm_api_management" "example" {
  name           = "example"
  publisher_name = "company"

  identity {
    type = "SystemAssigned"
  }
}

For connections between Kusto Clusters and Azure Data Factory:

resource "azurerm_data_factory_linked_service_kusto" "example" {
  name                 = "example"
  use_managed_identity = true
}
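
If a system-assigned identity cannot be used, a user-assigned identity remains an option; a sketch (the resource group references are illustrative):

resource "azurerm_user_assigned_identity" "example" {
  name                = "example-identity"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_api_management" "example_user_assigned" {
  name           = "example"
  publisher_name = "company"

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.example.id]
  }
}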

See

terraform:S6379

Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.

Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.

In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require this resource to disable its administrative accounts or permissions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools:

resource "azurerm_batch_pool" "example" {
  name = "sensitive"

  start_task {
    user_identity {
      auto_user {
        elevation_level = "Admin" # Sensitive
        scope = "Task"
      }
    }
  }
}

For Azure Container Registries:

resource "azurerm_container_registry" "example" {
  name = "example"
  admin_enabled = true # Sensitive
}

Compliant Solution

For Azure Batch Pools:

resource "azurerm_batch_pool" "example" {
  name = "example"

  start_task {
    user_identity {
      auto_user {
        elevation_level = "NonAdmin"
        scope = "Task"
      }
    }
  }
}

For Azure Container Registries:

resource "azurerm_container_registry" "exemple" {
  name = "example"
  admin_enabled = false
}

See

terraform:S6410

The TLS configuration of Google Cloud load balancers is defined through SSL policies.

Why is this an issue?

There are three managed profiles to choose from: COMPATIBLE (default), MODERN and RESTRICTED:

  • The RESTRICTED profile supports a reduced set of cryptographic algorithms, intended to meet stricter compliance requirements.
  • The MODERN profile supports a wider set of cryptographic algorithms, allowing most modern clients to negotiate TLS.
  • The COMPATIBLE profile supports the widest set of cryptographic algorithms, allowing connections from older client applications.

The MODERN and COMPATIBLE profiles allow the use of older cryptographic algorithms that are no longer considered secure and are susceptible to attack.

What is the potential impact?

An attacker may be able to force the use of the insecure cryptographic algorithms, downgrading the security of the connection. This allows them to compromise the confidentiality or integrity of the data being transmitted.

The MODERN profile allows the use of the insecure SHA-1 signing algorithm. An attacker may be able to generate forged data that passes a signature check, appearing to be legitimate data.

The COMPATIBLE profile additionally allows the use of key exchange algorithms that do not support forward secrecy. If the server’s private key is leaked, it can be used to decrypt all network traffic sent to and from that server.

How to fix it

Code examples

Noncompliant code example

resource "google_compute_ssl_policy" "example" {
  name            = "example"
  min_tls_version = "TLS_1_2"
  profile         = "COMPATIBLE" # Noncompliant
}

Compliant solution

resource "google_compute_ssl_policy" "example" {
  name            = "example"
  min_tls_version = "TLS_1_2"
  profile         = "RESTRICTED"
}

How does this work?

If an attacker is able to intercept and modify network traffic, they can filter the list of algorithms sent between the client and the server. By removing all secure algorithms from the list, the attacker can force the use of any insecure algorithms that remain.

The RESTRICTED profile only allows strong cryptographic algorithms to be used. There are no insecure algorithms that can compromise the security of the connection.

Pitfalls

Older client applications may not support the algorithms required by the RESTRICTED profile. These applications will no longer be able to connect.

If the MODERN or COMPATIBLE profiles must be used so that older clients can connect, consider using additional measures such as TLS client certificates or IP allow-lists to improve security.
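
In that case, a CUSTOM profile can pin an explicit set of vetted cipher suites rather than falling back to COMPATIBLE; a sketch (the custom_features list is illustrative):

resource "google_compute_ssl_policy" "example_custom" {
  name            = "example-custom"
  min_tls_version = "TLS_1_2"
  profile         = "CUSTOM"
  custom_features = [
    # only forward-secret, AEAD cipher suites
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
  ]
}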

Resources

Standards

External coding guidelines

terraform:S6412

When object versioning for Google Cloud Storage (GCS) buckets is enabled, different versions of an object are stored in the bucket, preventing accidental deletion. A specific version can always be deleted when the generation number of an object version is specified in the request.

Object versioning cannot be enabled on a bucket with a retention policy. A retention policy ensures that an object is retained for a specific period of time even if a request is made to delete or replace it. Thus, a retention policy locks the single current version of an object in the bucket, which differs from object versioning where different versions of an object are retained.
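
For contrast, this is how a retention policy is declared; a minimal sketch (the retention_period value, in seconds, is illustrative):

resource "google_storage_bucket" "example_retention" {
  name     = "example"
  location = "US"

  retention_policy {
    retention_period = 2592000 # 30 days, in seconds; locks the current object version
  }
}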

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable GCS bucket versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

Versioning is disabled by default:

resource "google_storage_bucket" "example" { # Sensitive
  name          = "example"
  location      = "US"
}

Compliant Solution

Versioning is enabled:

resource "google_storage_bucket" "example" {
  name          = "example"
  location      = "US"

  versioning {
    enabled = "true"
  }
}

See

terraform:S6413

Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require traceability for a longer duration.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the log retention period to an amount of time sufficient enough to be able to investigate and restore service in case of an incident.

Sensitive Code Example

For AWS Cloudwatch Logs:

resource "aws_cloudwatch_log_group" "example" {
  name = "example"
  retention_in_days = 3 # Sensitive
}

For Azure Firewall Policy:

resource "azurerm_firewall_policy" "example" {
  insights {
    enabled = true
    retention_in_days = 7 # Sensitive
  }
}

For Google Cloud Logging buckets:

resource "google_logging_project_bucket_config" "example" {
    project = var.project
    location = "global"
    retention_days = 7 # Sensitive
    bucket_id = "_Default"
}

Compliant Solution

For AWS Cloudwatch Logs:

resource "aws_cloudwatch_log_group" "example" {
  name = "example"
  retention_in_days = 30
}

For Azure Firewall Policy:

resource "azurerm_firewall_policy" "example" {
  insights {
    enabled = true
    retention_in_days = 30
  }
}

For Google Cloud Logging buckets:

resource "google_logging_project_bucket_config" "example" {
    project = var.project
    location = "global"
    retention_days = 30
    bucket_id = "_Default"
}
terraform:S6255

When S3 bucket versioning is enabled, it’s possible to require an additional authentication factor before allowing deletion of object versions or changes to the versioning state of a bucket. This prevents accidental object deletion by forcing the user sending the delete request to prove that they have a valid MFA device and a corresponding valid token.

Ask Yourself Whether

  • The S3 bucket stores sensitive information that must be preserved over the long term.
  • The S3 bucket grants delete permission to many users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 MFA delete, note that:

  • MFA delete can only be enabled with the AWS CLI or API and with the root account.
  • To delete an object version, the API should be used with the x-amz-mfa header.
  • The API request, with the x-amz-mfa header, can only be used in HTTPS.
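
For reference, enabling MFA delete through the AWS CLI might look like the following sketch; the account ID, device name, and token value are placeholders:

aws s3api put-bucket-versioning \
  --bucket example \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"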

Sensitive Code Example

A versioned S3 bucket does not have MFA delete enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"

  versioning {
    enabled = true
  }
}

A versioned S3 bucket does not have MFA delete enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example" { # Sensitive
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Compliant Solution

MFA delete is enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  versioning {
    enabled = true
    mfa_delete = true
  }
}

MFA delete is enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
    mfa_delete = "Enabled"
  }
  mfa = "${var.MFA}"
}

See

terraform:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In case adversaries gain physical access to the storage medium or otherwise obtain the stored data, they are not able to read it.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_efs_file_system:

resource "aws_efs_file_system" "fs" {  # Sensitive, encryption disabled by default
}

Compliant Solution

For aws_efs_file_system:

resource "aws_efs_file_system" "fs" {
  encrypted = true
}

See

terraform:S6375

Azure Active Directory offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Application Administrator
  • Authentication Administrator
  • Cloud Application Administrator
  • Global Administrator
  • Groups Administrator
  • Helpdesk Administrator
  • Password Administrator
  • Privileged Authentication Administrator
  • Privileged Role Administrator
  • User Administrator

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Global Administrator roles to fewer than five people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.

Sensitive Code Example

resource "azuread_directory_role" "example" {
  display_name = "Privileged Role Administrator" # Sensitive
}

resource "azuread_directory_role_member" "example" {
  role_object_id   = azuread_directory_role.example.object_id
  member_object_id = data.azuread_user.example.object_id
}

Compliant Solution

resource "azuread_directory_role" "example" {
  display_name = "Usage Summary Reports Reader"
}

resource "azuread_directory_role_member" "example" {
  role_object_id   = azuread_directory_role.example.object_id
  member_object_id = data.azuread_user.example.object_id
}

See

php:S2115

When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in Core PHP

Code examples

The following code uses an empty password to connect to a MySQL database.

The vulnerability can be fixed by using a strong password retrieved from an environment variable MYSQL_SECURE_PASSWORD. This environment variable is set during deployment. It should be strong and different for each database.

Noncompliant code example

$conn = new mysqli($servername, $username, ""); // Noncompliant

Compliant solution

$password = getenv('MYSQL_SECURE_PASSWORD');
$conn = new mysqli($servername, $username, $password);

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:

  1. Security Vulnerability: Hard-coded passwords can be easily discovered by anyone who has access to the code, such as other developers or attackers. This can lead to unauthorized access to the database and potential data breaches.
  2. Lack of Flexibility: Hard-coded passwords make it difficult to change the password without modifying the code. If the password needs to be updated, it would require recompiling and redeploying the code, which can be time-consuming and error-prone.
  3. Version Control Issues: Storing passwords in code can lead to version control issues. If the code is shared or stored in a version control system, the password will be visible to anyone with access to the repository, which is a security risk.

To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase.
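
As an illustration of the configuration-file approach, the credentials could live in a file kept outside the web root and excluded from version control; the path and key below are hypothetical:

// /etc/myapp/db.ini is readable by the application user only and is
// never committed to version control.
$config = parse_ini_file('/etc/myapp/db.ini');
$conn = new mysqli($servername, $username, $config['mysql_password']);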

Resources

Standards

php:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application.

The attacker can trick the user into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state or resources of the web application can be modified, for example by HTTP POST or HTTP DELETE requests.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Sensitive operations should not be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

For Laravel VerifyCsrfToken middleware

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = [
        'api/*'
    ]; // Sensitive; disable CSRF protection for a list of routes
}

For Symfony Forms

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class Controller extends AbstractController {

  public function action() {
    $this->createForm('', null, [
      'csrf_protection' => false, // Sensitive; disable CSRF protection for a single form
    ]);
  }
}

Compliant Solution

For Laravel VerifyCsrfToken middleware

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = []; // Compliant
}

Remember to add the @csrf Blade directive to the relevant forms when removing an element from $except, as shown below. Otherwise, the form submission will stop working.
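
For reference, a form protected by the middleware then includes the directive like this (a sketch; the route and fields are illustrative):

<form method="POST" action="/profile">
    @csrf
    <!-- form fields -->
</form>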

For Symfony Forms

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class Controller extends AbstractController {

  public function action() {
    $this->createForm('', null, []); // Compliant; CSRF protection is enabled by default
  }
}

See

php:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

CakePHP 1.x, 2.x:

Configure::write('debug', 1); // Sensitive: development mode
or
Configure::write('debug', 2); // Sensitive: development mode
or
Configure::write('debug', 3); // Sensitive: development mode

CakePHP 3.0:

use Cake\Core\Configure;

Configure::config('debug', true); // Sensitive: development mode

WordPress:

define( 'WP_DEBUG', true ); // Sensitive: development mode

Compliant Solution

CakePHP 1.x, 2.x:

Configure::write('debug', 0); // Compliant; this is the production mode

CakePHP 3.0:

use Cake\Core\Configure;

Configure::config('debug', false); // Compliant: "0" or "false" for CakePHP 3.x is suitable (production mode) to avoid leaking sensitive data in the logs.

WordPress:

define( 'WP_DEBUG', false ); // Compliant

See

php:S4508

This rule is deprecated, and will eventually be removed.

Deserializing objects is security-sensitive and has led to vulnerabilities in the past.

Object deserialization from an untrusted source can lead to unexpected code execution. Deserialization takes a stream of bits and turns it into an object. If the stream contains the type of object you expect, all is well. But if you’re deserializing data coming from untrusted input, and an attacker has inserted some other type of object, you’re in trouble. Why? A known attack scenario involves the creation of a serialized PHP object with crafted attributes that will modify your application’s behavior. This attack relies on PHP magic methods like __destruct, __wakeup or __toString. The attacker doesn’t necessarily need the source code of the targeted application to exploit the vulnerability; they can also rely on the presence of open-source components and use tools to craft malicious payloads.

Ask Yourself Whether

  • an attacker could have tampered with the source provided to the deserialization function
  • you are using an unsafe deserialization function

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To prevent insecure deserialization, it is recommended to (see the sketch after this list):

  • Use safe libraries that do not allow code execution at deserialization.
  • Not communicate with the outside world using serialized objects
  • Limit access to the serialized source
    • if it is a file, restrict the access to it.
    • if it comes from the network, restrict who has access to the process, such as with a Firewall or by authenticating the sender first.
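
As a minimal sketch of these practices, the example below exchanges plain JSON instead of serialized objects and, where unserialize() cannot be avoided, forbids object instantiation entirely ($untrustedInput is a placeholder):

// Prefer a data-only format such as JSON for untrusted input:
// json_decode() never instantiates arbitrary classes, so no magic
// methods (__destruct, __wakeup, ...) can be triggered.
$data = json_decode($untrustedInput, true);

// If unserialize() must be used, disable object creation: serialized
// objects are then restored as __PHP_Incomplete_Class instances.
$value = unserialize($untrustedInput, ['allowed_classes' => false]);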

See

php:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that decompresses into gigabytes of data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g., a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed size of each archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if that number is greater than a predefined threshold. In particular, it’s not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For ZipArchive module:

$zip = new ZipArchive();
if ($zip->open($file) === true) {
    $zip->extractTo('.'); // Sensitive
    $zip->close();
}

For Zip module:

$zip = zip_open($file);
while ($file = zip_read($zip)) {
    $filename = zip_entry_name($file);
    $size = zip_entry_filesize($file);

    if (substr($filename, -1) !== '/') {
        $content = zip_entry_read($file, zip_entry_filesize($file)); // Sensitive - zip_entry_read() uses zip_entry_filesize()
        file_put_contents($filename, $content);
    } else {
        mkdir($filename);
    }
}
zip_close($zip);

Compliant Solution

For ZipArchive module:

define('MAX_FILES', 10000);
define('MAX_SIZE', 1000000000); // 1 GB
define('MAX_RATIO', 10);
define('READ_LENGTH', 1024);

$fileCount = 0;
$totalSize = 0;

$zip = new ZipArchive();
if ($zip->open($file) === true) {
    for ($i = 0; $i < $zip->numFiles; $i++) {
        $filename = $zip->getNameIndex($i);
        $stats = $zip->statIndex($i);

        if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') {
            throw new Exception();
        }

        if (substr($filename, -1) !== '/') {
            $fileCount++;
            if ($fileCount > MAX_FILES) {
                // Reached max. number of files
                throw new Exception();
            }

            $fp = $zip->getStream($filename); // Compliant
            $currentSize = 0;
            while (!feof($fp)) {
                $currentSize += READ_LENGTH;
                $totalSize += READ_LENGTH;

                if ($totalSize > MAX_SIZE) {
                    // Reached max. size
                    throw new Exception();
                }

                // Additional protection: check compression ratio
                if ($stats['comp_size'] > 0) {
                    $ratio = $currentSize / $stats['comp_size'];
                    if ($ratio > MAX_RATIO) {
                        // Reached max. compression ratio
                        throw new Exception();
                    }
                }

                file_put_contents($filename, fread($fp, READ_LENGTH), FILE_APPEND);
            }

            fclose($fp);
        } else {
            mkdir($filename);
        }
    }
    $zip->close();
}

For Zip module:

define('MAX_FILES', 10000);
define('MAX_SIZE', 1000000000); // 1 GB
define('MAX_RATIO', 10);
define('READ_LENGTH', 1024);

$fileCount = 0;
$totalSize = 0;

$zip = zip_open($file);
while ($file = zip_read($zip)) {
    $filename = zip_entry_name($file);

    if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') {
        throw new Exception();
    }

    if (substr($filename, -1) !== '/') {
        $fileCount++;
        if ($fileCount > MAX_FILES) {
            // Reached max. number of files
            throw new Exception();
        }

        $currentSize = 0;
        while ($data = zip_entry_read($file, READ_LENGTH)) { // Compliant
            $currentSize += READ_LENGTH;
            $totalSize += READ_LENGTH;

            if ($totalSize > MAX_SIZE) {
                // Reached max. size
                throw new Exception();
            }

            // Additional protection: check compression ratio
            if (zip_entry_compressedsize($file) > 0) {
                $ratio = $currentSize / zip_entry_compressedsize($file);
                if ($ratio > MAX_RATIO) {
                    // Reached max. compression ratio
                    throw new Exception();
                }
            }

            file_put_contents($filename, $data, FILE_APPEND);
        }
    } else {
        mkdir($filename);
    }
}
zip_close($zip);

See

php:S2277

This rule is deprecated; use S5542 instead.

Why is this an issue?

Without OAEP in RSA encryption, it takes less work for an attacker to decrypt the data or infer patterns from the ciphertext. This rule raises an issue when openssl_public_encrypt is used with one of the following padding constants: OPENSSL_NO_PADDING, OPENSSL_PKCS1_PADDING or OPENSSL_SSLV23_PADDING.

Noncompliant code example

function encrypt($data, $key) {
  $crypted='';
  openssl_public_encrypt($data, $crypted, $key, OPENSSL_NO_PADDING); // Noncompliant
  return $crypted;
}

Compliant solution

function encrypt($data, $key) {
  $crypted='';
  openssl_public_encrypt($data, $crypted, $key, OPENSSL_PKCS1_OAEP_PADDING);
  return $crypted;
}

Resources

php:S2278

This rule is deprecated; use S5547 instead.

Why is this an issue?

According to the US National Institute of Standards and Technology (NIST), the Data Encryption Standard (DES) is no longer considered secure:

Adopted in 1977 for federal agencies to use in protecting sensitive, unclassified information, the DES is being withdrawn because it no longer provides the security that is needed to protect federal government information.

Federal agencies are encouraged to use the Advanced Encryption Standard, a faster and stronger algorithm approved as FIPS 197 in 2001.

For similar reasons, RC2 should also be avoided.

Noncompliant code example

<?php
  $ciphertext = mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_DES_COMPAT, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_TRIPLEDES, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_3DES, $key, $plaintext, $mode); // Noncompliant

  $cipher = "des-ede3-cfb";  // Noncompliant
  $ciphertext_raw = openssl_encrypt($plaintext, $cipher, $key, $options=OPENSSL_RAW_DATA, $iv);
?>

Compliant solution

<?php
  $ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, MCRYPT_MODE_CBC, $iv);
?>
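
Note that the mcrypt extension itself was removed in PHP 7.2. On current PHP versions, an equivalent AES-based call through the OpenSSL extension might look like this sketch:

<?php
  // AES-128 in CBC mode via OpenSSL; $key and $iv must be randomly
  // generated and of the correct length (16 bytes each).
  $ciphertext = openssl_encrypt($plaintext, "aes-128-cbc", $key, OPENSSL_RAW_DATA, $iv);
?>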

Resources

php:S3336

PHP session tokens are normally transmitted through HTTP cookies. However, for clients that do not support cookies and when the PHP session.use_trans_sid setting is enabled, those tokens can be transmitted as URL parameters.

Why is this an issue?

A GET URL parameter can be disclosed in a variety of ways:

  • Directly in a web browser address bar.
  • In navigation history.
  • In web server or intermediate proxy log files.

What is the potential impact?

Attackers with access to any of those disclosure locations will be able to see and steal a victim’s session token. They can then use it to log in as the user, impersonate their account, and take advantage of their privileges.

Such an attack can be more or less severe depending on the victim’s privileges. Common security impacts range from data theft to application takeover.

Data theft

Attackers with access to a compromised account will be able to disclose any information stored on it. This includes the Personally Identifiable Information (PII) of the user.

The confidentiality of PII is a requirement from national security regulatory authorities in most countries. Insufficiently protecting this data could have legal consequences and lead to fines or other prosecutions.

Application takeover

Attackers who compromise the account of a high-privileged user could modify internal web application logic, disrupt workflows, or change other application settings in a way that gives them full control over it.

Such an attack would lead to reputational damages and financial and legal consequences.

How to fix it

Code examples

Noncompliant code example

; php.ini
session.use_trans_sid=1  ; Noncompliant

Compliant solution

; php.ini
session.use_trans_sid=0

How does this work?

The compliant code example disables the session.use_trans_sid setting.

Note that this parameter is off by default.
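
If the main configuration file cannot be changed, the same hardening can usually be applied at runtime before the session starts, as in this sketch:

// Disable URL-based session IDs and only accept IDs from cookies;
// this must run before session_start().
ini_set('session.use_trans_sid', '0');
ini_set('session.use_only_cookies', '1');
session_start();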

Resources

Standards

php:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples

Noncompliant code example

Example with a symmetric cipher, AES, used in the weak ECB mode:

mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, "ecb"); // Noncompliant

Compliant solution

Mcrypt is deprecated and should not be used. You can use Sodium instead.

For the AES symmetric cipher, use the GCM mode:

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + Counter mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
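
For illustration, a minimal encrypt-then-MAC sketch with the OpenSSL extension could look like this; it assumes two independent 32-byte keys, $encKey and $macKey:

// AES-256-CBC with an HMAC-SHA-256 tag computed over the IV and the
// ciphertext (encrypt-then-MAC).
function encryptThenMac(string $plaintext, string $encKey, string $macKey): string {
    $iv = random_bytes(openssl_cipher_iv_length('aes-256-cbc'));
    $ciphertext = openssl_encrypt($plaintext, 'aes-256-cbc', $encKey, OPENSSL_RAW_DATA, $iv);
    $tag = hash_hmac('sha256', $iv . $ciphertext, $macKey, true);
    return $iv . $ciphertext . $tag;
}

On the receiving side, the tag would be recomputed and compared with hash_equals() before any decryption is attempted.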

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

php:S5547

This vulnerability makes it possible for the cleartext of the encrypted message to be recovered without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant

Compliant solution

Mcrypt is deprecated and should not be used. You can use Sodium instead.

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.
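
A sketch of the same recommendation using the OpenSSL extension instead, assuming $key is a randomly generated 32-byte secret:

// AES-256-GCM: an authenticated mode with a 128-bit block size.
$iv = random_bytes(12); // 96-bit nonce, as recommended for GCM
$ciphertext = openssl_encrypt($plaintext, 'aes-256-gcm', $key, OPENSSL_RAW_DATA, $iv, $tag);
// $iv, $ciphertext and $tag must all be stored or transmitted together.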

Resources

Standards

php:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the rand() and mt_rand() functions rely on a pseudorandom number generator, they should not be used for security-critical applications or for protecting sensitive data.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use functions which rely on a cryptographically strong random number generator such as random_int() or random_bytes() or openssl_random_pseudo_bytes()
  • When using openssl_random_pseudo_bytes(), provide and check the crypto_strong parameter (a sketch follows the compliant solution below)
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

$random = rand();
$random2 = mt_rand(0, 99);

Compliant Solution

$randomInt = random_int(0,99); // Compliant; generates a cryptographically secure random integer
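
The crypto_strong check mentioned above could be implemented as in this sketch:

$bytes = openssl_random_pseudo_bytes(32, $cryptoStrong);
if ($cryptoStrong !== true) {
    // The platform could not provide a cryptographically strong result.
    throw new Exception('No cryptographically strong randomness available');
}
$token = bin2hex($bytes); // e.g. a session or password-reset token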

See

php:S3334

File access functions in PHP are typically used to open local files. They are also capable of reading files from remote servers using protocols such as HTTP, HTTPS and FTP.

This behavior is controlled by the allow_url_fopen and allow_url_include settings.

Why is this an issue?

Most applications do not require or expect the file access functions to download remotely accessible files. However, attackers can abuse these remote file access features while exploiting other vulnerabilities, such as path traversal issues.

What is the potential impact?

While activating these settings does not pose a direct threat to the application’s security, they can make the exploitation of other vulnerabilities easier and more severe.

If an attacker can control a file location while allow_url_fopen is set to 1, they can use this ability to perform a Server-Side Request Forgery exploit. This allows the attacker to affect more than just the local application; they may also be able to attack other assets laterally on the local network.

If allow_url_include is set to 1, the attacker will also have the ability to download and execute arbitrary PHP code.

How to fix it

allow_url_fopen and allow_url_include should be deactivated in the main PHP configuration file. Note that allow_url_include is disabled by default while allow_url_fopen is not and must be explicitly disabled.

Code examples

Noncompliant code example

; php.ini  Noncompliant; allow_url_fopen is enabled by default
allow_url_include=1  ; Noncompliant

Compliant solution

; php.ini
allow_url_fopen=0
allow_url_include=0

Resources

Standards

php:S3335

The cgi.force_redirect php.ini configuration controls the behavior of the PHP engine when used in CGI mode. In particular, it prevents CGI scripts from being directly requested without prior web server or application processing.

When disabled, CGI scripts can be requested directly.

Why is this an issue?

Pre-processing on the server side is often required to check user authentication when working in CGI mode. These preliminary actions can also set various configuration parameters necessary for the CGI script to work correctly.

What is the potential impact?

CGI scripts might behave unexpectedly if the proper configuration is not set up before they are accessed.

Most serious security-related consequences will affect the authorization and authentication mechanisms of the application. When the web server is responsible for authenticating clients and forwarding the proper identity to the script, direct access will bypass this authentication step.

Attackers could also provide arbitrary identities to the CGI script by forging specific HTTP headers or parameters. They could then impersonate any legitimate user of the application.

How to fix it

cgi.force_redirect should be enforced in the main PHP configuration file.

Note that this parameter is enabled by default.

Code examples

Noncompliant code example

; php.ini
cgi.force_redirect=0  ; Noncompliant

Compliant solution

; php.ini
cgi.force_redirect=1

Pitfalls

The cgi.force_redirect setting is not supported by all web servers. For example, the Microsoft IIS web server is unable to differentiate an internally redirected request from a normal one.

While using such a server, the cgi.force_redirect parameter will have to be disabled for the CGI scripts to work properly. In that case, it is important to ensure that the CGI scripts take this security threat into account.

Resources

Standards

php:S3337

The enable_dl PHP configuration setting allows PHP extensions to be loaded dynamically at runtime.

Why is this an issue?

When dynamic loading is enabled, PHP code can load arbitrary PHP extensions by calling the dl function. This can be used to bypass restrictions set with the open_basedir configuration.

PHP defaults to allowing dynamic loading.

How to fix it

The enable_dl setting should be set to 0 in the main PHP configuration.

Code examples

Noncompliant code example

; php.ini
enable_dl=1  ; Noncompliant

Compliant solution

; php.ini
enable_dl=0

Resources

Standards

php:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information. Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples

Noncompliant code example

$opts = array(
  'ssl' => [
    'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_1_CLIENT // Noncompliant
  ],
  'http'=>array(
    'method'=>"GET"
  )
);

$context = stream_context_create($opts);

$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

Compliant solution

$opts = array(
  'ssl' => [
    'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT
  ],
  'http'=>array(
    'method'=>"GET"
  )
);

$context = stream_context_create($opts);

$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

php:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information. Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples

Noncompliant code example

Here is an example of a private key generation with RSA:

$config = [
    "digest_alg"       => "sha512",
    "private_key_bits" => 1024,                 // Noncompliant
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];

$res = openssl_pkey_new($config);

Compliant solution

$config = [
    "digest_alg"       => "sha512",
    "private_key_bits" => 2048,
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];

$res = openssl_pkey_new($config);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
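
In PHP, generating such a key is straightforward, since an AES key is only a sequence of random bytes:

// 32 cryptographically secure random bytes = a 256-bit AES key.
$key = random_bytes(32);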

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit-long private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1
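
For illustration, generating such a key with the OpenSSL extension might look like the following sketch; prime256v1 is OpenSSL’s name for secp256r1:

$config = [
    "private_key_type" => OPENSSL_KEYTYPE_EC,
    "curve_name"       => "prime256v1", // secp256r1, a NIST P-256 curve
];

$res = openssl_pkey_new($config);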

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

php:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive and has led to vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only means to defeat or weaken an encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key is reused for a long time.
  • the private key might be compromised. This can happen when it is stored in an unsafe place or transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that the strength of encryption algorithms decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise, it exposes the encrypted value to cryptanalysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.

Sensitive Code Example

Builtin functions

function myEncrypt($cipher, $key, $data, $mode, $iv, $options, $padding, $infile, $outfile, $recipcerts, $headers, $nonce, $ad, $pub_key_ids, $env_keys)
{
    mcrypt_ecb ($cipher, $key, $data, $mode); // Sensitive
    mcrypt_cfb($cipher, $key, $data, $mode, $iv); // Sensitive
    mcrypt_cbc($cipher, $key, $data, $mode, $iv); // Sensitive
    mcrypt_encrypt($cipher, $key, $data, $mode); // Sensitive

    openssl_encrypt($data, $cipher, $key, $options, $iv); // Sensitive
    openssl_public_encrypt($data, $crypted, $key, $padding); // Sensitive
    openssl_pkcs7_encrypt($infile, $outfile, $recipcerts, $headers); // Sensitive
    openssl_seal($data, $sealed_data, $env_keys, $pub_key_ids); // Sensitive

    sodium_crypto_aead_aes256gcm_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_chacha20poly1305_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_chacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_xchacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_box_seal ($data, $key); // Sensitive
    sodium_crypto_box ($data, $nonce, $key); // Sensitive
    sodium_crypto_secretbox ($data, $nonce, $key); // Sensitive
    sodium_crypto_stream_xor ($data, $nonce, $key); // Sensitive
}

CakePHP

use Cake\Utility\Security;

function myCakeEncrypt($key, $data, $engine)
{
    Security::encrypt($data, $key); // Sensitive

    // Do not use custom-made engines and remember that Mcrypt is deprecated.
    Security::engine($engine); // Sensitive. Setting the encryption engine.
}

CodeIgniter

class EncryptionController extends CI_Controller
{
    public function __construct()
    {
        parent::__construct();
        $this->load->library('encryption');
    }

    public function index()
    {
        $this->encryption->create_key(16); // Sensitive. Review the key length.
        $this->encryption->initialize( // Sensitive.
            array(
                'cipher' => 'aes-256',
                'mode' => 'ctr',
                'key' => 'the key',
            )
        );
        $this->encryption->encrypt("mysecretdata"); // Sensitive.
    }
}

CraftCMS version 3

use Craft;

// This is similar to Yii as it is used by CraftCMS
function craftEncrypt($data, $key, $password) {
    Craft::$app->security->encryptByKey($data, $key); // Sensitive
    Craft::$app->getSecurity()->encryptByKey($data, $key); // Sensitive
    Craft::$app->security->encryptByPassword($data, $password); // Sensitive
    Craft::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive
}

Drupal 7 - Encrypt module

function drupalEncrypt() {
    $encrypted_text = encrypt('some string to encrypt'); // Sensitive
}

Joomla

use Joomla\Crypt\CipherInterface;

abstract class MyCipher implements CipherInterface // Sensitive. Implementing custom cipher class
{}

function joomlaEncrypt() {
    new Joomla\Crypt\Cipher_Sodium(); // Sensitive
    new Joomla\Crypt\Cipher_Simple(); // Sensitive
    new Joomla\Crypt\Cipher_Rijndael256(); // Sensitive
    new Joomla\Crypt\Cipher_Crypto(); // Sensitive
    new Joomla\Crypt\Cipher_Blowfish(); // Sensitive
    new Joomla\Crypt\Cipher_3DES(); // Sensitive
}

Laravel

use Illuminate\Support\Facades\Crypt;

function myLaravelEncrypt($data)
{
    Crypt::encryptString($data); // Sensitive
    Crypt::encrypt($data); // Sensitive
    // encrypt using the Laravel "encrypt" helper
    encrypt($data); // Sensitive
}

PHP-Encryption library

use Defuse\Crypto\Crypto;
use Defuse\Crypto\File;

function myPhpEncryption($data, $key, $password, $inputFilename, $outputFilename, $inputHandle, $outputHandle) {
    Crypto::encrypt($data, $key); // Sensitive
    Crypto::encryptWithPassword($data, $password); // Sensitive
    File::encryptFile($inputFilename, $outputFilename, $key); // Sensitive
    File::encryptFileWithPassword($inputFilename, $outputFilename, $password); // Sensitive
    File::encryptResource($inputHandle, $outputHandle, $key); // Sensitive
    File::encryptResourceWithPassword($inputHandle, $outputHandle, $password); // Sensitive
}

PhpSecLib

function myphpseclib($mode) {
    new phpseclib\Crypt\RSA(); // Sensitive. Note: RSA can also be used for signing data.
    new phpseclib\Crypt\AES(); // Sensitive
    new phpseclib\Crypt\Rijndael(); // Sensitive
    new phpseclib\Crypt\Twofish(); // Sensitive
    new phpseclib\Crypt\Blowfish(); // Sensitive
    new phpseclib\Crypt\RC4(); // Sensitive
    new phpseclib\Crypt\RC2(); // Sensitive
    new phpseclib\Crypt\TripleDES(); // Sensitive
    new phpseclib\Crypt\DES(); // Sensitive

    new phpseclib\Crypt\AES($mode); // Sensitive
    new phpseclib\Crypt\Rijndael($mode); // Sensitive
    new phpseclib\Crypt\TripleDES($mode); // Sensitive
    new phpseclib\Crypt\DES($mode); // Sensitive
}

Sodium Compat library

function mySodiumCompatEncrypt($data, $ad, $nonce, $key) {
    ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_aead_xchacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_encrypt($data, $ad, $nonce, $key); // Sensitive

    ParagonIE_Sodium_Compat::crypto_aead_aes256gcm_encrypt($data, $ad, $nonce, $key); // Sensitive

    ParagonIE_Sodium_Compat::crypto_box($data, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_secretbox($data, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_box_seal($data, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_secretbox_xchacha20poly1305($data, $nonce, $key); // Sensitive
}

Yii version 2

use Yii;

// Similar to CraftCMS as it uses Yii
function YiiEncrypt($data, $key, $password) {
    Yii::$app->security->encryptByKey($data, $key); // Sensitive
    Yii::$app->getSecurity()->encryptByKey($data, $key); // Sensitive
    Yii::$app->security->encryptByPassword($data, $password); // Sensitive
    Yii::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive
}

Zend

use Zend\Crypt\FileCipher;
use Zend\Crypt\PublicKey\DiffieHellman;
use Zend\Crypt\PublicKey\Rsa;
use Zend\Crypt\Hybrid;
use Zend\Crypt\BlockCipher;

function myZendEncrypt($key, $data, $prime, $options, $generator, $lib)
{
    new FileCipher; // Sensitive. This is used to encrypt files

    new DiffieHellman($prime, $generator, $key); // Sensitive

    $rsa = Rsa::factory([ // Sensitive
        'public_key'    => 'public_key.pub',
        'private_key'   => 'private_key.pem',
        'pass_phrase'   => 'mypassphrase',
        'binary_output' => false,
    ]);
    $rsa->encrypt($data); // No issue raised here. The configuration of the Rsa object is the line to review.

    $hybrid = new Hybrid(); // Sensitive

    BlockCipher::factory($lib, $options); // Sensitive
}

See

php:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Symfony

Code examples

In the context of Symfony Security, session fixation protection can be disabled by setting the session_fixation_strategy attribute to none.

Session fixation protection is enabled by default in Symfony. It can be explicitly enabled by setting the session_fixation_strategy attribute to migrate or invalidate.

Noncompliant code example

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'none', // Noncompliant
    ]);
};

Compliant solution

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'migrate',
    ]);
};

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.
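
Outside of Symfony, the same protection can be sketched in plain PHP by regenerating the session identifier as soon as authentication succeeds (the checkCredentials() helper below is hypothetical):

session_start();

if (checkCredentials($_POST['username'] ?? '', $_POST['password'] ?? '')) { // hypothetical helper
    // Replace the (possibly fixated) session ID and delete the old session data.
    session_regenerate_id(true);
    $_SESSION['authenticated'] = true;
}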

Resources

Documentation

Security Configuration Reference - Session Fixation Strategy

Standards

php:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (which is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / security-sensitive cookies.

Sensitive Code Example

In php.ini you can specify the flags for the session cookie which is security-sensitive:

session.cookie_httponly = 0  ; Sensitive: this sensitive session cookie is created with the httponly flag set to false and so it can be stolen easily in case of an XSS vulnerability

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, true, false);  // Sensitive: this sensitive session cookie is created with the httponly flag (the fifth argument) set to false and so it can be stolen easily in case of an XSS vulnerability

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, false); // Sensitive: this sensitive cookie is created with the httponly flag (the seventh argument) set to false and so it can be stolen easily in case of an XSS vulnerability

By default, the setcookie and setrawcookie functions set the httponly flag (the seventh argument) to false, so cookies can be stolen easily in case of an XSS vulnerability:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)
setrawcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)

Compliant Solution

session.cookie_httponly = 1  ; Compliant: the sensitive cookie is protected against theft thanks to cookie_httponly=1
session_set_cookie_params($lifetime, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the fifth argument set to true (HttpOnly=true)
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)
setrawcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)
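
Since PHP 7.3, setcookie and setrawcookie also accept an options array, which makes the flags explicit and harder to misplace; a minimal sketch:

setcookie($name, $value, [
    'expires'  => $expire,
    'path'     => $path,
    'domain'   => $domain,
    'secure'   => true,
    'httponly' => true, // Compliant: HttpOnly set explicitly
]);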

See

php:S3332

This rule is deprecated, and will eventually be removed.

Why is this an issue?

Cookies without fixed lifetimes or expiration dates are known as non-persistent, or "session" cookies, meaning they last only as long as the browser session and are deleted when the browser closes. Cookies with expiration dates, "persistent" cookies, are stored/persisted until those dates.

Non-persistent cookies should be used for the management of logged-in sessions on web sites. To make a cookie non-persistent, simply omit the expires attribute.

This rule raises an issue when expires is set for a session cookie, either programmatically or via configuration, such as session.cookie_lifetime.
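
For illustration, a minimal sketch of both forms ($sessionToken is a hypothetical value):

setcookie("auth", $sessionToken, time() + 86400); // persistent: an expiration date is set, the cookie outlives the browser session
setcookie("auth", $sessionToken); // non-persistent "session" cookie: the expires argument is omitted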

Resources

php:S3333

When accessing files on the local filesystem, PHP can enforce security checks to defend against some attacks. The open_basedir setting in the main PHP configuration defines a set of directories that the application is allowed to access. Access to locations outside of these directories will be blocked.

Why is this an issue?

The PHP runtime will allow the application to access all files underneath the configured set of directories. If no value is set, the application may access any file on the filesystem.

What is the potential impact?

open_basedir is commonly used to ensure that a PHP application can only access files needed for the application function. While deactivating this setting does not pose a direct threat to the application’s security, it can make exploitation of other vulnerabilities easier and more severe.

If an attacker can exploit a path traversal vulnerability, they will be able to access any file made available to the application’s user account. This may include system-critical or otherwise sensitive files.

In shared hosting environments, a vulnerability can affect all co-hosted applications and not only the vulnerable one. open_basedir can help limit the scope of the compromise in that case.

How to fix it

The main PHP configuration should define the open_basedir setting. This setting should not include overly large directories, such as the root directory of the filesystem.

Adding the current directory, denoted by “.”, to the open_basedir configuration is also dangerous. It is possible to change the current directory within PHP scripts by calling chdir(), effectively removing any protection.
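
Note that, since PHP 5.3, the restriction can also be tightened (but never relaxed) at runtime; a minimal sketch:

// Tighten the open_basedir restriction for the current request.
// Runtime changes can only make the setting more restrictive, never less.
ini_set('open_basedir', '/var/www/myapp/data');

file_get_contents('/etc/passwd'); // fails: the path is outside the allowed base directory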

Code examples

Noncompliant code example

; php.ini
open_basedir="/:${USER}/scripts/data"  ; Noncompliant; root directory in the list
; php.ini
; open_basedir= ; Noncompliant; setting commented out

Compliant solution

; php.ini
open_basedir="${USER}/scripts/data"
; php.ini
open_basedir="/var/www/myapp/data"

Resources

Standards

php:S4784

This rule is deprecated; use S2631 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as /(a+)+s/ will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression a+s (without grouping) is evaluated efficiently, in milliseconds, and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.
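
A minimal sketch of the effect (timings are indicative, and PCRE may abort with a backtrack-limit error instead of finishing):

$input = str_repeat('a', 30) . 'bs';

preg_match('/(a+)+s/', $input); // Sensitive: catastrophic backtracking, evaluation time doubles with each extra 'a'
preg_match('/a+s/', $input);    // equivalent pattern without grouping, evaluated in linear time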

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and contains at least two instances of any of the following characters: *+{.

Example: (a+)*

The following functions are detected as executing regular expressions:

Note that ereg* functions have been removed in PHP 7, and PHP 5’s end-of-life date was the 1st of January 2019. Using PHP 5 is dangerous as there will be no security fixes.

This rule’s goal is to guide security code reviews.

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not set the pcre.backtrack_limit configuration option to a high value as it will increase the resource consumption of PCRE functions.

Check the error codes of PCRE functions via preg_last_error.

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. Do not run vulnerable regular expressions on user input.

If possible, use a library which is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Avoid executing a user input string as a regular expression, or at least use preg_quote to escape regular expression characters.
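
For example, a user-supplied search term can be escaped before being embedded in a pattern ($subject is a hypothetical value):

$subject = 'text to search';
$userInput = $_GET['search'] ?? '';
$pattern = '/' . preg_quote($userInput, '/') . '/i'; // regular expression metacharacters are escaped
$found = preg_match($pattern, $subject) === 1;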

Exceptions

An issue will be created for the functions mb_ereg_search_pos, mb_ereg_search_regs and mb_ereg_search if and only if at least the first argument, i.e. the $pattern, is provided.

The current implementation does not follow variables. It will only detect regular expressions hard-coded directly in the function call.

$pattern = "/(a+)+/";
$result = eregi($pattern, $input);  // No issue will be raised even if it is Sensitive

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

php:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led in the past to the following vulnerabilities:

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a very few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie. The encoding can be reverted and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, any information read from a cookie should be sanitized.

Sensitive Code Example

$value = "1234 1234 1234 1234";

// Review this cookie as it seems to send sensitive information (credit card number).
setcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive
setrawcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive

See

php:S3331

This rule is deprecated, and will eventually be removed.

A cookie’s domain specifies which websites should be able to read it. Left blank, browsers are supposed to only send the cookie to sites that exactly match the sending domain. For example, if a cookie was set by lovely.dream.com, it should only be readable by that domain, and not by nightmare.com or even strange.dream.com. If you want to allow sub-domain access for a cookie, you can specify it by adding a dot in front of the cookie’s domain, like so: .dream.com. But cookie domains should always use at least two levels.

Cookie domains can be set either programmatically or via configuration. This rule raises an issue when any cookie domain is set with a single level, as in .com.

Ask Yourself Whether

  • the domain attribute has only one level of domain naming.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

  • You should check that the domain attribute has been set and that its value has more than one level of domain naming, like: sonarsource.com

Sensitive Code Example

setcookie("TestCookie", $value, time()+3600, "/~path/", ".com", 1); // Noncompliant
session_set_cookie_params(3600, "/~path/", ".com"); // Noncompliant

; inside php.ini
session.cookie_domain=".com"  ; Noncompliant

Compliant Solution

setcookie("TestCookie", $value, time()+3600, "/~path/", ".myDomain.com", 1);
session_set_cookie_params(3600, "/~path/", ".myDomain.com");

; inside php.ini
session.cookie_domain=".myDomain.com"

See

php:S3338

This rule is deprecated, and will eventually be removed.

Why is this an issue?

file_uploads is an on-by-default PHP configuration that allows files to be uploaded to your site. Since accepting files from strangers is inherently dangerous, this feature should be disabled unless it is absolutely necessary for your site.

This rule raises an issue when file_uploads is not explicitly disabled.

Noncompliant code example

; php.ini
file_uploads=1  ; Noncompliant

Compliant solution

; php.ini
file_uploads=0

Resources

php:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

$ldapconn = ldap_connect("ldap.example.com");

if ($ldapconn) {
    $ldapbind = ldap_bind($ldapconn); // Noncompliant
}

Compliant solution

$ldaprdn  = 'uname';
$ldappass = 'password';

$ldapconn = ldap_connect("ldap.example.com");

if ($ldapconn) {
    $ldapbind = ldap_bind($ldapconn, $ldaprdn, $ldappass); // Compliant
}

Resources

Documentation

Standards

php:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm an e-mail address when registering on a website, resetting a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2, because this slows down brute-force attacks.

Sensitive Code Example

$hash = md5($data); // Sensitive
$hash = sha1($data);   // Sensitive

Compliant Solution

// for a password
$hash = password_hash($password, PASSWORD_BCRYPT); // Compliant

// other context
$hash = hash("sha512", $data);

See

php:S4792

This rule is deprecated, and will eventually be removed.

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and on how it is logged.

This rule flags for review code that initiates logger configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The loggers’ mode (info, warn, error) might filter out important information. They might not print contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format that is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them in the logs. This includes checking their size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts. It could, for example, use up all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

Basic PHP configuration:

function configure_logging() {
  error_reporting(E_RECOVERABLE_ERROR); // Sensitive
  error_reporting(32); // Sensitive

  ini_set('docref_root', '1'); // Sensitive
  ini_set('display_errors', '1'); // Sensitive
  ini_set('display_startup_errors', '1'); // Sensitive
  ini_set('error_log', "path/to/logfile"); // Sensitive - check logfile is secure
  ini_set('error_reporting', E_PARSE ); // Sensitive
  ini_set('error_reporting', 64); // Sensitive
  ini_set('log_errors', '0'); // Sensitive
  ini_set('log_errors_max_length', '512'); // Sensitive
  ini_set('ignore_repeated_errors', '1'); // Sensitive
  ini_set('ignore_repeated_source', '1'); // Sensitive
  ini_set('track_errors', '0'); // Sensitive

  ini_alter('docref_root', '1'); // Sensitive
  ini_alter('display_errors', '1'); // Sensitive
  ini_alter('display_startup_errors', '1'); // Sensitive
  ini_alter('error_log', "path/to/logfile"); // Sensitive - check logfile is secure
  ini_alter('error_reporting', E_PARSE ); // Sensitive
  ini_alter('error_reporting', 64); // Sensitive
  ini_alter('log_errors', '0'); // Sensitive
  ini_alter('log_errors_max_length', '512'); // Sensitive
  ini_alter('ignore_repeated_errors', '1'); // Sensitive
  ini_alter('ignore_repeated_source', '1'); // Sensitive
  ini_alter('track_errors', '0'); // Sensitive
}

Definition of custom loggers with psr/log

abstract class MyLogger implements \Psr\Log\LoggerInterface { // Sensitive
    // ...
}

abstract class MyLogger2 extends \Psr\Log\AbstractLogger { // Sensitive
    // ...
}

abstract class MyLogger3 {
    use \Psr\Log\LoggerTrait; // Sensitive
    // ...
}

Exceptions

No issue will be raised for logger configuration when it follows recommended settings for production servers. The following examples are all valid:

  ini_set('docref_root', '0');
  ini_set('display_errors', '0');
  ini_set('display_startup_errors', '0');

  error_reporting(0);
  ini_set('error_reporting', 0);

  ini_set('log_errors', '1');
  ini_set('log_errors_max_length', '0');
  ini_set('ignore_repeated_errors', '0');
  ini_set('ignore_repeated_source', '0');
  ini_set('track_errors', '1');

See

php:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in cURL

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by setting CURLOPT_SSL_VERIFYHOST to 0 or false. To enable validation, set the value to 2 or true, or do not set CURLOPT_SSL_VERIFYHOST at all to use the secure default value.

Noncompliant code example

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);  // Noncompliant
curl_exec($curl);
curl_close($curl);

Compliant solution

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 2);
curl_exec($curl);
curl_close($curl);

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Resources

Standards

php:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

$password = "65DBGgwe4uazdWQA"; // Sensitive

$httpUrl = "https://example.domain?user=user&password=65DBGgwe4uazdWQA" // Sensitive
$sshUrl = "ssh://user:65DBGgwe4uazdWQA@example.domain" // Sensitive

Compliant Solution

$user = getUser();
$password = getPassword(); // Compliant

$httpUrl = "https://example.domain?user=$user&password=$password" // Compliant
$sshUrl = "ssh://$user:$password@example.domain" // Compliant

See

php:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

$url = "http://example.com"; // Sensitive
$url = "ftp://anonymous@example.com"; // Sensitive
$url = "telnet://anonymous@example.com"; // Sensitive

$con = ftp_connect('example.com'); // Sensitive

$trans = (new Swift_SmtpTransport('XXX', 1234)); // Sensitive

$mailer = new PHPMailer(true); // Sensitive

define( 'FORCE_SSL_ADMIN', false); // Sensitive
define( 'FORCE_SSL_LOGIN', false); // Sensitive

Compliant Solution

$url = "https://example.com";
$url = "sftp://anonymous@example.com";
$url = "ssh://anonymous@example.com";

$con = ftp_ssl_connect('example.com');

$trans = (new Swift_SmtpTransport('smtp.example.org', 1234))
  ->setEncryption('tls')
;

$mailer = new PHPMailer(true);
$mailer->SMTPSecure = 'tls';

define( 'FORCE_SSL_ADMIN', true);
define( 'FORCE_SSL_LOGIN', true);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

php:S5693

Rejecting requests with a significant content length is a good practice for controlling network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • lower than or equal to 8 MB for file uploads.
    • lower than or equal to 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.
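
Outside of framework validators, a request size check can also be sketched in plain PHP before the body is processed (the 8 MB limit mirrors the recommendation above):

$maxBytes = 8 * 1024 * 1024; // 8 MB limit
if ((int) ($_SERVER['CONTENT_LENGTH'] ?? 0) > $maxBytes) {
    http_response_code(413); // Payload Too Large
    exit;
}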

Sensitive Code Example

For Symfony Constraints:

use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Mapping\ClassMetadata;

class TestEntity
{
    public static function loadValidatorMetadata(ClassMetadata $metadata)
    {
        $metadata->addPropertyConstraint('upload', new Assert\File([
            'maxSize' => '100M', // Sensitive
        ]));
    }
}

For Laravel Validator:

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class TestController extends Controller
{
    public function test(Request $request)
    {
        $validatedData = $request->validate([
            'upload' => 'required|file', // Sensitive
        ]);
    }
}

Compliant Solution

For Symfony Constraints:

use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Mapping\ClassMetadata;

class TestEntity
{
    public static function loadValidatorMetadata(ClassMetadata $metadata)
    {
        $metadata->addPropertyConstraint('upload', new Assert\File([
            'maxSize' => '8M', // Compliant
        ]));
    }
}

For Laravel Validator:

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class TestController extends Controller
{
    public function test(Request $request)
    {
        $validatedData = $request->validate([
            'upload' => 'required|file|max:8000', // Compliant: Laravel's max rule for files is expressed in kilobytes (8000 KB = 8 MB)
        ]);
    }
}

See

php:S6437

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The consequences vary greatly depending on the situation and the secret-exposed audience. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, each user of the application will be able to use the third-party service without limit for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this makes it possible to determine whether an attacker took advantage of the leaked secret, and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

The following code example is noncompliant because it uses a hardcoded secret value.

Noncompliant code example

use Defuse\Crypto\KeyOrPassword;

function createKey() {
    $password = "3xAmpl3";  // Noncompliant
    return KeyOrPassword::createFromPassword($password);
}

Compliant solution

use Defuse\Crypto\KeyOrPassword;

function createKey() {
    $password = $_ENV["SECRET"];
    return KeyOrPassword::createFromPassword($password);
}

How does this work?

While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This makes the secret value environment-dependent and avoids storing the password in the source code itself.

Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change.

Resources

Documentation

Standards

php:S2070

This rule is deprecated; use S4790 instead.

Why is this an issue?

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as if they had the originally-hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160.

Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3.

Noncompliant code example

$password = ...

if (md5($password) === '1f3870be274f6c49b3e31a0c6728957f') { // Noncompliant; md5() hashing algorithm is not secure for password management
   [...]
}

if (sha1($password) === 'd0be2dc421be4fcd0172e5afceea3970e2f3d940') { // Noncompliant; sha1() hashing algorithm is not secure for password management
   [...]
}
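
A compliant sketch using one of the safer alternatives, and PHP's dedicated password API for password management ($token, $expectedTokenHash and $storedHash are hypothetical values):

if (hash('sha256', $token) === $expectedTokenHash) { // Compliant for non-password data
    // grant access
}

if (password_verify($password, $storedHash)) { // Compliant for password management
    // grant access
}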

Resources

php:S2077

Formatted SQL queries can be difficult to maintain and debug, and they can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); the goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Sensitive Code Example

$id = $_GET['id'];
mysql_connect('localhost', $username, $password) or die('Could not connect: ' . mysql_error());
mysql_select_db('myDatabase') or die('Could not select database');

$result = mysql_query("SELECT * FROM myTable WHERE id = " . $id);  // Sensitive, could be susceptible to SQL injection

while ($row = mysql_fetch_object($result)) {
    echo $row->name;
}

Compliant Solution

$id = $_GET['id'];
try {
    $conn = new PDO('mysql:host=localhost;dbname=myDatabase', $username, $password);
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $conn->prepare('SELECT * FROM myTable WHERE id = :id');
    $stmt->execute(array('id' => $id));

    while($row = $stmt->fetch(PDO::FETCH_OBJ)) {
        echo $row->name;
    }
} catch(PDOException $e) {
    echo 'ERROR: ' . $e->getMessage();
}

Exceptions

No issue will be raised if one of the functions is called with a hard-coded string (no concatenation) and this string does not contain a "$" sign.

$result = mysql_query("SELECT * FROM myTable WHERE id = 42") or die('Query failed: ' . mysql_error());  // Compliant

The current implementation does not follow variables. It will only detect SQL queries which are concatenated or contain a $ sign directly in the function call.

$query = "SELECT * FROM myTable WHERE id = " . $id;
$result = mysql_query($query);  // No issue will be raised even if it is Sensitive

See

php:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Core PHP

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

$xml = file_get_contents('xxe.xml');
$doc = simplexml_load_string($xml, 'SimpleXMLElement', LIBXML_NOENT); // Noncompliant
$doc = new DOMDocument();
$doc->load('xxe.xml', LIBXML_NOENT); // Noncompliant
$reader = new XMLReader();
$reader->open('xxe.xml');
$reader->setParserProperty(XMLReader::SUBST_ENTITIES, true); // Noncompliant

Compliant solution

External entity substitution is disabled by default in simplexml_load_string() and DOMDocument::load().

$xml = file_get_contents('xxe.xml');
$doc = simplexml_load_string($xml, 'SimpleXMLElement');
$doc = new DOMDocument();
$doc->load('xxe.xml');
$reader = new XMLReader();
$reader->open('xxe.xml');
$reader->setParserProperty(XMLReader::SUBST_ENTITIES, false);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
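
In PHP, such a restriction can be sketched with libxml_set_external_entity_loader, which intercepts every external entity resolution (the allow-list below is hypothetical):

libxml_set_external_entity_loader(function ($publicId, $systemId, $context) {
    $allowed = ['file:///var/www/schemas/trusted.dtd']; // hypothetical allow-list
    if (in_array($systemId, $allowed, true)) {
        return fopen($systemId, 'r'); // resolve only known and safe entities
    }
    return null; // reject any other external entity
});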

Resources

Standards

php:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive. It has led in the past to the following vulnerabilities:

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to ensure that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not their use through frameworks or high-level APIs such as HTTP connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases, there is no need to open a socket yourself. Instead, use libraries and existing protocols.
  • Encrypt all data sent if it is sensitive. Usually, it is better to encrypt it even if the data is not sensitive, as that might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.

Sensitive Code Example

function handle_sockets($domain, $type, $protocol, $port, $backlog, $addr, $hostname, $local_socket, $remote_socket, $fd) {
    socket_create($domain, $type, $protocol); // Sensitive
    socket_create_listen($port, $backlog); // Sensitive
    socket_addrinfo_bind($addr); // Sensitive
    socket_addrinfo_connect($addr); // Sensitive
    socket_create_pair($domain, $type, $protocol, $fd);

    fsockopen($hostname); // Sensitive
    pfsockopen($hostname); // Sensitive
    stream_socket_server($local_socket); // Sensitive
    stream_socket_client($remote_socket); // Sensitive
    stream_socket_pair($domain, $type, $protocol); // Sensitive
}

See

php:S2964

This rule is deprecated, and will eventually be removed.

Why is this an issue?

sleep is sometimes used in a mistaken attempt to prevent Denial of Service (DoS) attacks by throttling response rate. But because it ties up a thread, each request takes longer to serve than it otherwise would, making the application more vulnerable to DoS attacks, rather than less.

Noncompliant code example

if (is_bad_ip($requester)) {
  sleep(5);  // Noncompliant
}

Resources

php:S5328

If a session ID can be guessed (not generated with a secure pseudorandom generator, of insufficient length, etc.), an attacker may be able to hijack another user’s session.

Ask Yourself Whether

  • the session ID is not unique.
  • the session ID is set from a user-controlled input.
  • the session ID is generated with a non-secure pseudorandom generator.
  • the session ID length is too short.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Don’t manually generate session IDs; use the language’s native functionality instead.

Sensitive Code Example

session_id(bin2hex(random_bytes(4))); // Sensitive: 4 bytes is too short
session_id($_POST["session_id"]); // Sensitive: session ID can be specified by the user

Compliant Solution

session_regenerate_id(); // Compliant
session_id(bin2hex(random_bytes(16))); // Compliant

See

php:S1523

Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: XSS attacks) and have a huge impact on an application’s security.

This rule marks for review each occurrence of the eval function. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (for example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.

Sensitive Code Example

eval($code_to_be_dynamically_executed);
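
A safer pattern, when user input must select a behavior, is to dispatch over a fixed allow-list of callables instead of evaluating strings (the action names are hypothetical):

$actions = [
    'export' => function () { /* export data */ },
    'import' => function () { /* import data */ },
];

$requested = $_GET['action'] ?? '';
if (isset($actions[$requested])) {
    $actions[$requested](); // only pre-defined code can ever run
}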

See

php:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password-and-salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, longer, cryptographically secure salts should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:

  • Argon2
  • PBKDF2
  • Scrypt
  • Bcrypt

When they are used for password storage, using a secure, random salt is required.

However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.

How to fix it in Core PHP

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

$salt = 'salty';
$hash = hash_pbkdf2('sha256', $password, $salt, 100000); // Noncompliant

Compliant solution

$salt = random_bytes(16);
$hash = hash_pbkdf2('sha256', $password, $salt, 100000);

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates the salt randomly and with a length that provides the required security level: here, 16 bytes (128 bits), a length commonly recommended as a minimum by industry standards.

The compliant code example ensures the salt is random and has a sufficient length by calling the random_bytes function with a length parameter set to 16. This function internally uses a cryptographically secure pseudo-random number generator.

Resources

Standards

php:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run in a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

chmod("foo", 0777); // Sensitive
umask(0); // Sensitive
umask(0750); // Sensitive

For Symfony Filesystem:

use Symfony\Component\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0777); // Sensitive

For Laravel Filesystem:

use Illuminate\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0777); // Sensitive

Compliant Solution

chmod("foo", 0750); // Compliant
umask(0027); // Compliant

For Symfony Filesystem:

use Symfony\Component\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0750); // Compliant

For Laravel Filesystem:

use Illuminate\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0750); // Compliant

See

php:S6345

External requests initiated by a WordPress server should be considered as security-sensitive. They may contain sensitive data which is stored in the files or in the database of the server. It’s important for the administrator of a WordPress server to understand what they contain and to which server they are sent.

WordPress makes it possible to block external requests by setting the WP_HTTP_BLOCK_EXTERNAL option to true. It’s then possible to authorize requests to only a few servers using another option named WP_ACCESSIBLE_HOSTS.

Ask Yourself Whether

  • Your WordPress website contains code which may call external requests to servers you don’t know.
  • Your WordPress website may send sensitive data to other servers.
  • Your WordPress website uses a lot of plugins or themes.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Uninstall WordPress plugins which send requests to servers you don’t know.
  • Make sure that WP_HTTP_BLOCK_EXTERNAL is defined in wp-config.php.
  • Make sure that WP_HTTP_BLOCK_EXTERNAL is set to true.
  • Make sure that WP_ACCESSIBLE_HOSTS is configured to authorize requests to the servers you trust.

Sensitive Code Example

define( 'WP_HTTP_BLOCK_EXTERNAL', false ); // Sensitive

Compliant Solution

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'api.wordpress.org' );

See

php:S6348

By default, the WordPress administrator and editor roles can add unfiltered HTML content in various places, such as post content. This includes the capability to add JavaScript code.

If an account with such a role gets hijacked, this capability can be used to plant malicious JavaScript code that gets executed whenever somebody visits the website.

Ask Yourself Whether

  • You really need the possibility to add unfiltered HTML with editor or administrator roles.
  • There’s a chance that the accounts of authorized users get compromised.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The unfiltered_html capability should be granted only to trusted roles that need to use markup when publishing dynamic content to the WordPress website. If this capability is not required for any users, including the administrator and editor roles, then it’s recommended to set DISALLOW_UNFILTERED_HTML to true.

Sensitive Code Example

define( 'DISALLOW_UNFILTERED_HTML', false ); // Sensitive

Compliant Solution

define( 'DISALLOW_UNFILTERED_HTML', true );

See

php:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, solving the issue takes more time, which increases the attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give information to an attacker about the network topology.
  • It is a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software.

Sensitive Code Example

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '8.8.8.8', 23);  // Sensitive

Compliant Solution

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, IP_ADDRESS, 23);  // Compliant
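
The IP_ADDRESS constant in the compliant example is left undefined by the rule; a minimal sketch of how it could be populated from an environment variable (the variable name is illustrative):

// Resolve the address from the environment instead of the source code.
define('IP_ADDRESS', getenv('BACKEND_IP') ?: '127.0.0.1'); // fallback to a non-sensitive loopback address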

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

php:S6341

WordPress makes it possible to edit theme and plugin files directly in the Administration Screens. While it may look like an easy way to customize a theme or make a quick change, it’s a dangerous feature. When visiting the theme or plugin editor for the first time, WordPress displays a warning to make it clear that using such a feature may break the web site by mistake.

More importantly, users who have access to this feature can trigger the execution of any PHP code and may therefore take full control of the WordPress instance. This security risk could be exploited by an attacker who manages to get access to one of the authorized users.

Setting the DISALLOW_FILE_EDIT option to true in wp-config.php disables this risky feature. The default value is false.

Ask Yourself Whether

  • You really need to use the theme and plugin editors.
  • The theme and plugin editors are available to users who cannot be fully trusted.
  • There’s a chance that the accounts of authorized users get compromised.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Modify the theme and plugin files using a local editor and deploy them to the server in a secure way.
  • Make sure that DISALLOW_FILE_EDIT is defined in wp-config.php.
  • Make sure that DISALLOW_FILE_EDIT is set to true.

Sensitive Code Example

define( 'DISALLOW_FILE_EDIT', false ); // Sensitive

Compliant Solution

define( 'DISALLOW_FILE_EDIT', true );

See

php:S6343

Automatic updates are a great way of making sure your application gets security updates as soon as they are available. Once a vendor releases a security update, it is crucial to apply it in a timely manner before malicious actors exploit the vulnerability. Relying on manual updates is usually too late, especially if the application is publicly accessible on the internet.

Ask Yourself Whether

  • there is no specific reason for deactivating all automatic updates.
  • you meant to deactivate only automatic major updates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Don’t deactivate automatic updates unless you have a good reason to do so. This way, you’ll be sure to receive security updates as soon as they are available. If you are worried about an automatic update breaking something, check if it is possible to only activate automatic updates for minor or security updates.

Sensitive Code Example

define( 'WP_AUTO_UPDATE_CORE', false ); // Sensitive
define( 'AUTOMATIC_UPDATER_DISABLED', true ); // Sensitive

Compliant Solution

define( 'WP_AUTO_UPDATE_CORE', true ); // Minor and major automatic updates enabled
define( 'WP_AUTO_UPDATE_CORE', 'minor' ); // Only minor updates are enabled
define( 'AUTOMATIC_UPDATER_DISABLED', false );

See

php:S6346

WordPress has a database repair and optimization mode that can be activated by setting WP_ALLOW_REPAIR to true in the configuration.

If activated, the repair page can be accessed by any user, authenticated or not. This makes sense because if the database is corrupted, the authentication mechanism might not work.

Malicious users could trigger this potentially costly operation repeatedly, slowing down the website and making it unavailable.

Ask Yourself Whether

  • The database is not currently corrupted.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable automatic database repair mode only in case of database corruption. This feature should be deactivated again when the database issue is resolved.

Sensitive Code Example

define( 'WP_ALLOW_REPAIR', true ); // Sensitive

Compliant Solution

// The default value is false, so the value does not have to be explicitly set.
define( 'WP_ALLOW_REPAIR', false );

See

php:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive and has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line. It is common to write it to the process’s standard input, or to give the path to a file containing the information.
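
As an illustration of sanitizing arguments before use, a minimal sketch that validates an argument against an allow-list (the mode names and the run helper are hypothetical):

$mode = $argv[1] ?? '';
$allowedModes = ['import', 'export'];

if (!in_array($mode, $allowedModes, true)) {
    fwrite(STDERR, "Unknown mode\n");
    exit(1);
}
run($mode); // hypothetical entry point, now called with a validated value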

Sensitive Code Example

Builtin access to $argv

function globfunc() {
    global $argv; // Sensitive. Reference to global $argv
    foreach ($argv as $arg) { // Sensitive.
        // ...
    }
}

function myfunc($argv) {
    $param = $argv[0]; // OK. Reference to local $argv parameter
    // ...
}

foreach ($argv as $arg) { // Sensitive. Reference to $argv.
    // ...
}

$myargv = $_SERVER['argv']; // Sensitive. Equivalent to $argv.

function serve() {
    $myargv = $_SERVER['argv']; // Sensitive.
    // ...
}

myfunc($argv); // Sensitive

$myvar = $HTTP_SERVER_VARS[0]; // Sensitive. Note: HTTP_SERVER_VARS has been removed since PHP 5.4.

$options = getopt('a:b:'); // Sensitive. Parsing arguments.

$GLOBALS["argv"]; // Sensitive. Equivalent to $argv.

function myglobals() {
    $GLOBALS["argv"]; // Sensitive
}

$argv = [1,2,3]; // Sensitive. It is a bad idea to override argv.

Zend Console

new Zend\Console\Getopt(['myopt|m' => 'this is an option']); // Sensitive

Getopt-php library

new \GetOpt\Option('m', 'myoption', \GetOpt\GetOpt::REQUIRED_ARGUMENT); // Sensitive

See

php:S4828

Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system.

Accidentally setting an incorrect PID or signal or allowing untrusted sources to assign arbitrary values to these parameters may result in a denial of service.

Also, the system treats the signal differently if the destination PID is less than or equal to 0. This different behavior may affect multiple processes with the same (E)UID simultaneously if the call is left uncontrolled.

Ask Yourself Whether

  • The parameters pid and sig are untrusted (they come from an external source).
  • This function is triggered by non-administrators.
  • Signal handlers on the target processes stop important functions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For stateful applications with user management, ensure that only administrators trigger this code.
  • Verify that the pid and sig parameters are correct before using them.
  • Ensure that the process sending the signals runs with as few OS privileges as possible.
  • Isolate the process on the system based on its (E)UID.
  • Ensure that the signal does not interrupt any essential functions when intercepted by a target’s signal handlers.

Sensitive Code Example

$targetPid = (int)$_GET["pid"];
posix_kill($targetPid, 9); // Sensitive

Compliant Solution

$targetPid = (int)$_GET["pid"];

// Validate the untrusted PID
// with a pre-approved list or authorization checks
if (isValidPid($targetPid)) {
    posix_kill($targetPid, 9);
}
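
The isValidPid function above is not defined by the rule; a minimal sketch of such a check against a pre-approved list (the getManagedWorkerPids helper is hypothetical):

function isValidPid(int $pid): bool {
    // Only positive PIDs of workers managed by this application may be signaled.
    $approvedPids = getManagedWorkerPids(); // hypothetical helper
    return $pid > 0 && in_array($pid, $approvedPids, true);
}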

See

php:S4829

This rule is deprecated, and will eventually be removed.

Reading standard input is security-sensitive and has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.
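
A minimal sketch of such sanitization, validating a line read from stdin against an expected format before use (the accepted format and the process helper are illustrative):

$line = trim(fgets(STDIN));

// Accept only short alphanumeric identifiers.
if (!preg_match('/^[A-Za-z0-9]{1,32}$/', $line)) {
    fwrite(STDERR, "Invalid input\n");
    exit(1);
}
process($line); // hypothetical consumer of the validated input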

Sensitive Code Example

// Any reference to STDIN is Sensitive
$varstdin = STDIN; // Sensitive
stream_get_line(STDIN, 40); // Sensitive
stream_copy_to_stream(STDIN, STDOUT); // Sensitive
// ...


// Except those references as they can't create an injection vulnerability.
ftruncate(STDIN, 5); // OK
ftell(STDIN); // OK
feof(STDIN); // OK
fseek(STDIN, 5); // OK
fclose(STDIN); // OK


// STDIN can also be referenced like this
$mystdin = 'php://stdin'; // Sensitive

file_get_contents('php://stdin'); // Sensitive
readfile('php://stdin'); // Sensitive

$input = fopen('php://stdin', 'r'); // Sensitive
fclose($input); // OK

See

php:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in cURL

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by setting CURLOPT_SSL_VERIFYPEER to false. To enable validation, set the value to true, or do not set CURLOPT_SSL_VERIFYPEER at all, in which case the secure default value is used.

Noncompliant code example

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); // Noncompliant
curl_exec($curl);
curl_close($curl);

Compliant solution

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_exec($curl);
curl_close($curl);

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
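
With cURL, a custom CA can be trusted without disabling validation by pointing CURLOPT_CAINFO at the CA bundle. A minimal sketch, with an illustrative URL and bundle path:

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://internal.example.com/');
// Keep peer verification enabled and provide the internal CA bundle instead.
curl_setopt($curl, CURLOPT_CAINFO, '/etc/ssl/certs/internal-ca.pem');
curl_exec($curl);
curl_close($curl);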

Resources

Standards

php:S6339

Why is this an issue?

Secret keys are used in combination with an algorithm to encrypt data. A typical use case is an authentication system. For such a system to be secure, the secret key should have a value which cannot be guessed and which is long enough to not be vulnerable to brute-force attacks.

A "salt" is an extra piece of data which is included when hashing data such as a password. Its value should have the same properties as a secret key.

This rule raises an issue when it detects that a secret key or a salt has a predictable value or that it’s not long enough.

Noncompliant code example

WordPress:

define('AUTH_KEY', 'hello'); // Noncompliant
define('AUTH_SALT', 'hello'); // Noncompliant
define('AUTH_KEY', 'put your unique phrase here'); // Noncompliant, this is the default value

Compliant solution

WordPress:

define('AUTH_KEY', 'D&ovlU#|CvJ##uNq}bel+^MFtT&.b9{UvR]g%ixsXhGlRJ7q!h}XWdEC[BOKXssj');
define('AUTH_SALT', 'FIsAsXJKL5ZlQo)iD-pt??eUbdc{_Cn<4!d~yqz))&B D?AwK%)+)F2aNwI|siOe');

Resources

php:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie which is not designed to be sent over non-HTTPS communication.
  • it is unclear whether the website contains mixed content or not (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

In php.ini you can specify the flags for the session cookie, which is security-sensitive:

session.cookie_secure = 0; // Sensitive: this security-sensitive session cookie is created with the secure flag set to false (cookie_secure = 0)

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, false);
// Sensitive: this security-sensitive session cookie is created with the secure flag (the fourth argument) set to false

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, false); // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) set to false

By default, the setcookie and setrawcookie functions set the sixth argument / secure flag to false:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain); // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (false by default)
setrawcookie($name, $value, $expire, $path, $domain); // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (false by default)

Compliant Solution

session.cookie_secure = 1; // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the cookie_secure property set to 1
session_set_cookie_params($lifetime, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the fourth argument) set to true
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true
setrawcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true

See

php:S4834

This rule is deprecated, and will eventually be removed.

The access control of an application must be properly implemented in order to restrict access to resources to authorized entities; otherwise this could lead to vulnerabilities.

Granting correct permissions to users, applications, groups or roles, and defining the permissions required to access a resource, is sensitive and must therefore be done with care. For instance, it is obvious that only users with administrator privilege should be authorized to add or remove the administrator permission of another user.

Ask Yourself Whether

  • Permissions granted to an entity (user, application) allow access to information or functionalities not needed by this entity.
  • Privileges are easily acquired (e.g. based on the location of the user or the type of device used, defined by third parties, not requiring approval, etc.).
  • An entity with inherited permissions, default permissions, or no privileges (e.g. an anonymous user) is authorized to access a protected resource.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

At minimum, an access control system should:

  • Use a well-defined access control model like RBAC or ACL.
  • Review entities' permissions regularly to remove permissions that are no longer needed.
  • Respect the principle of least privilege ("an entity has access only to the information and resources that are necessary for its legitimate purpose").

Sensitive Code Example

CakePHP

use Cake\Auth\BaseAuthorize;
use Cake\Controller\Controller;

abstract class MyAuthorize extends BaseAuthorize { // Sensitive. Class extending Cake\Auth\BaseAuthorize.
    // ...
}

// Note that "isAuthorized" methods will only be detected in direct subclasses of Cake\Controller\Controller.
abstract class MyController extends Controller {
    public function isAuthorized($user) { // Sensitive. Method called isAuthorized in a Cake\Controller\Controller.
        return false;
    }
}

See

php:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive and has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

PHP built-in header function:

header("Access-Control-Allow-Origin: *"); // Sensitive

Laravel:

response()->header('Access-Control-Allow-Origin', "*"); // Sensitive

Symfony:

use Symfony\Component\HttpFoundation\Response;

$response = new Response(
    'Content',
    Response::HTTP_OK,
    ['Access-Control-Allow-Origin' => '*'] // Sensitive
);
$response->headers->set('Access-Control-Allow-Origin', '*'); // Sensitive

User-controlled origin:

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\Request;

$origin = $request->headers->get('Origin');

$response->headers->set('Access-Control-Allow-Origin', $origin); // Sensitive

Compliant Solution

PHP built-in header function:

header("Access-Control-Allow-Origin: $trusteddomain");

Laravel:

response()->header('Access-Control-Allow-Origin', $trusteddomain);

Symfony:

use Symfony\Component\HttpFoundation\Response;

$response = new Response(
    'Content',
    Response::HTTP_OK,
    ['Access-Control-Allow-Origin' => $trusteddomain]
);

$response->headers->set('Access-Control-Allow-Origin', $trusteddomain);

User-controlled origin validated with an allow-list:

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\Request;

$origin = $request->headers->get('Origin');

if (in_array($origin, $trustedOrigins)) {
    $response->headers->set('Access-Control-Allow-Origin', $origin);
}

See

php:S5808

When granting users access to resources of an application, such an authorization should be based on strong decisions. For instance, a user may be authorized to access a resource only if they are authenticated, or if they have the correct role and privileges.

Why is this an issue?

Access control is a critical aspect of web frameworks that ensures proper authorization and restricts access to sensitive resources or actions. To enable access control, web frameworks offer components that are responsible for evaluating user permissions and making access control decisions. They might examine the user’s credentials, such as roles or privileges, and compare them against predefined rules or policies to determine whether the user should be granted access to a specific resource or action.

Conventionally, these checks should never grant access to every request received. If an endpoint or component is meant to be public, then it should be ignored by access control components. Conversely, if an endpoint should deny some users from accessing it, then access control has to be configured correctly for this endpoint.

Granting unrestricted access to all users can lead to security vulnerabilities and potential misuse of critical functionalities. It is important to carefully assess access decisions based on factors such as user roles, resource sensitivity, and business requirements. Implementing a robust and granular access control mechanism is crucial for the security and integrity of the web application itself and its surrounding environment.

What is the potential impact?

Not verifying user access strictly can introduce significant security risks. Some of the most prominent risks are listed below. Depending on the use case, it is very likely that other risks are introduced on top of the ones listed.

Unauthorized access

As the access of users is not checked strictly, it becomes very easy for an attacker to gain access to restricted areas or functionalities, potentially compromising the confidentiality, integrity, and availability of sensitive resources. They may exploit this access to perform malicious actions, such as modifying or deleting data, impersonating legitimate users, or gaining administrative privileges, ultimately compromising the security of the system.

Theft of sensitive data

Theft of sensitive data can result from incorrect access control if attackers manage to gain access to databases, file systems, or other storage mechanisms where sensitive data is stored. This can lead to the theft of personally identifiable information (PII), financial data, intellectual property, or other confidential information. The stolen data can be used for various malicious purposes, such as identity theft, financial fraud, or selling the data on the black market, causing significant harm to individuals and organizations affected by the breach.

How to fix it in Symfony

Code examples

Noncompliant code example

The vote method of a VoterInterface implementation is not compliant when it returns only an affirmative decision (ACCESS_GRANTED):

class NoncompliantVoter implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        return self::ACCESS_GRANTED; // Noncompliant
    }
}

The voteOnAttribute method of a Voter class is not compliant when it returns only an affirmative decision (true):

class NoncompliantVoter extends Voter
{
    protected function supports(string $attribute, $subject)
    {
        return true;
    }

    protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token)
    {
        return true; // Noncompliant
    }
}

Compliant solution

The vote method of a VoterInterface type should return a negative decision (ACCESS_DENIED) or abstain from making a decision (ACCESS_ABSTAIN):

class CompliantVoter implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        if (foo()) {
            return self::ACCESS_GRANTED;
        } else if (bar()) {
            return self::ACCESS_ABSTAIN;
        }
        return self::ACCESS_DENIED;
    }
}

The voteOnAttribute method of a Voter type should return a negative decision (false):

class CompliantVoter extends Voter
{
    protected function supports(string $attribute, $subject)
    {
        return true;
    }

    protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token)
    {
        if (foo()) {
            return true;
        }
        return false;
    }
}

Resources

Standards

Web:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website. When opened, the link changes the original page to "http://example.com/fake_login", which hosts a fake login page that could trick real users into entering their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: in Chrome 88+, Firefox 79+ and Safari 12.1+, target=_blank on anchors implies rel=noopener, which makes the protection enabled by default.

Sensitive Code Example

<a href="http://example.com/dangerous" target="_blank"> <!-- Sensitive -->

<a href="{{variable}}" target="_blank"> <!-- Sensitive -->

Compliant Solution

To prevent pages from abusing window.opener, use rel=noopener on <a href=> to force its value to be null on the opened pages.

<a href="http://petssocialnetwork.io" target="_blank" rel="noopener">

Exceptions

No issue will be raised when href contains a hardcoded relative URL, as it has fewer chances of being vulnerable. A URL is considered hardcoded and relative if it doesn’t start with http:// or https://, and if it does not contain any of the characters {}$()[]

<a href="internal.html" target="_blank" >

See

Web:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty and Groovy's template engine allow configuration of automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not relevant when variables are used in an HTML attribute, because the ':' character is not escaped and an attack like the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

<!-- Django templates -->
<p>{{ variable|safe }}</p><!-- Sensitive -->
{% autoescape off %}<!-- Sensitive -->

<!-- Jinja2 templates -->
<p>{{ variable|safe }}</p><!-- Sensitive -->
{% autoescape false %}<!-- Sensitive -->
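
A compliant counterpart (not given in the rule text) keeps auto-escaping enabled and relies on the default escaping of interpolated variables:

<!-- Django templates -->
<p>{{ variable }}</p><!-- Compliant: escaped by default -->
{% autoescape on %}<!-- Compliant -->

<!-- Jinja2 templates -->
<p>{{ variable }}</p><!-- Compliant: escaped by default -->
{% autoescape true %}<!-- Compliant -->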

See

Web:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add quiet malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface, or otherwise affect the general availability of the application.
  • mine cryptocurrencies in the background.

Likewise, a compromised software component deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It ensures that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

<script
    src="https://cdn.example.com/latest/script.js"
></script> <!-- Sensitive -->

Compliant Solution

<script
    src="https://cdn.example.com/v5.3.6/script.js"
    integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
></script>

See

ruby:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, solving the issue takes more time, which increases the attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give information to an attacker about the network topology.
  • It is a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software.

Sensitive Code Example

ip = "192.168.12.42" # Sensitive

Compliant Solution

ip = IP_ADDRESS # Compliant

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

ruby:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

Hard-coded credentials have led to vulnerabilities in the past.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
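
This rule section includes no code sample; a minimal Ruby sketch of the pattern it targets and of the environment-variable alternative (the variable names are illustrative):

# Sensitive: hard-coded credential in the source code
password = "s3cr3t"

# Safer: read the credential from the environment at runtime
password = ENV.fetch("DB_PASSWORD")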

See

javascript:S5732

Clickjacking attacks occur when an attacker tries to trick a user into clicking certain buttons or links of a legitimate website. This attack can take place through malicious HTML frames well hidden in an attacker’s website.

For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of their profile by clicking a button. This is a critical feature with high privacy concerns. Users are generally well informed by the social network about the consequences of this action. An attacker can trick users, without their consent, into performing this action through the embedded code below, added to a malicious website:

<html>
<b>Click on the button below to win 5000$</b>
<br>
<iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe>
</html>

By playing with the size of the iframe, it is sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page.

Ask Yourself Whether

  • Critical actions of the application are prone to clickjacking attacks because a simple click on a link or a button can trigger them.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins of frames allowed to be loaded by the browser (this directive deprecates X-Frame-Options).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the frameAncestors directive (or if frameAncestors is set to 'none'):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'none'"] // Sensitive: frameAncestors is set to none
    }
  })
);

Compliant Solution

In an Express.js application, a standard way to implement the CSP frame-ancestors directive is the helmet-csp or helmet middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'example.com'"] // Compliant
    }
  })
);

See

javascript:S5734

MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet, etc.), web browsers look for the Content-Type header defined in the HTTP response received from the server, but often this header is not set or is set with an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content-type, generally by inspecting the content of the resource (the first bytes). This "guess mechanism" is called MIME type sniffing.

Attackers can take advantage of this feature when a website ("example.com" here) allows arbitrary files to be uploaded. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as:

<script>alert(document.cookie)</script>

When the victim visits the website showing the uploaded image, the malicious script embedded in the image will be executed by web browsers performing MIME type sniffing.

Ask Yourself Whether

  • Content-Type header is not systematically set for all resources.
  • Content of resources can be controlled by users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and will prevent browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch the resource is not interpreted. For example, within a <script> object context, JavaScript MIME types are expected (like application/javascript) in the Content-Type header.

Sensitive Code Example

In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet({
    noSniff: false, // Sensitive
  })
);

Compliant Solution

When using helmet in an Express.js application, the noSniff middleware should be enabled (it is enabled by default):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.noSniff());

See

javascript:S5730

Mixed content occurs when a resource is loaded over the HTTP protocol from a website accessed over the HTTPS protocol. Mixed content is not encrypted and is exposed to MITM attacks, which could break the entire level of protection that was desired by implementing encryption with the HTTPS protocol.

The main threat with mixed content is not only the confidentiality of resources but the whole website’s integrity:

  • Passive mixed content (e.g. <img src="http://example.com/picture.png">) allows an attacker to access and replace only these resources, like images, with malicious ones, which could lead to successful phishing attacks.
  • With active mixed content (e.g. <script src="http://example.com/library.js">) an attacker can compromise the entire website, for example by injecting malicious JavaScript code (accessing and modifying the DOM, stealing cookies, etc.).

Ask Yourself Whether

  • The HTTPS protocol is in place and external resources are fetched from the website pages.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and will block the loading of mixed content.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com']
    } // Sensitive: blockAllMixedContent directive is missing
  })
);

Compliant Solution

In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com'],
      blockAllMixedContent: [] // Compliant
    }
  })
);

See

javascript:S5736

The HTTP header Referer contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks.

Suppose an e-commerce website asks the user for their credit card number to purchase a product:

<html>
<body>
<form action="/valid_order" method="GET">
Type your credit card number to purchase products:
<input type=text id="cc" value="1111-2222-3333-4444">
<input type=submit>
</form>
</body>

When the above HTML form is submitted, an HTTP GET request will be performed. The requested URL will be https://example.com/valid_order?cc=1111-2222-3333-4444, with the credit card number inside, which is obviously not secure for these reasons:

  • URLs are stored in the history of browsers.
  • URLs could be accidentally shared when doing copy/paste actions.
  • URLs can be stolen if a malicious person looks at the computer screen of a user.

In addition to these threats, when further requests are performed from the "valid_order" page with a simple legitimate embedded script like this:

<script src="https://webanalyticservices_example.com/track">

The Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue:

GET /track HTTP/2.0
Host: webanalyticservices_example.com
Referer: https://example.com/valid_order?cc=1111-2222-3333-4444

Ask Yourself Whether

  • Confidential information exists in URLs.
  • The semantics of HTTP methods are not respected (e.g. use of a GET method instead of POST when the state of the application is changed).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Confidential information should not be set inside URLs (GET requests) of the application, and a safe Referrer-Policy header (i.e. different from unsafe-url or no-referrer-when-downgrade) should be used to control how much information is included in the Referer header.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is disabled or used with no-referrer-when-downgrade or unsafe-url:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used
  })
);

Compliant Solution

In an Express.js application, a secure solution is to use the helmet referrer policy middleware set to no-referrer:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer' // Compliant
  })
);

See

javascript:S5739

When implementing the HTTPS protocol, websites usually continue to support the HTTP protocol in order to redirect users to HTTPS when they request an HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS.

Web browsers that see the Strict-Transport-Security policy header for the first time record the information specified in the header:

  • the max-age directive, which specifies how long the policy should be kept on the web browser.
  • the includeSubDomains optional directive, which specifies whether the policy should apply to all subdomains or not.
  • the preload optional directive, which is not part of the HSTS specification but is supported by all modern web browsers.

With the preload directive, the web browser never connects to the website over HTTP. To use this directive, the application concerned must be submitted to a preload service maintained by Google.
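Putting these directives together, a typical response header looks like this:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload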

Ask Yourself Whether

  • The website is accessible with the unencrypted HTTP protocol.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Strict-Transport-Security policy header. It is recommended to apply this policy to all subdomains (includeSubDomains) and for at least 6 months (max-age=15552000), or even better for 1 year (max-age=31536000).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without the recommended values:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 3153600, // Sensitive, recommended >= 15552000
  includeSubDomains: false // Sensitive, recommended 'true'
}));

Compliant Solution

In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 31536000,
  includeSubDomains: true
})); // Compliant

See

javascript:S5743

This rule is deprecated, and will eventually be removed.

By default, web browsers perform DNS prefetching to reduce the latency of DNS resolutions required when a user clicks links from a website page.

For instance on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser:

<a href="https://otherexample.com">go on our partner website</a>

DNS resolution can add significant latency to requests, especially if the page contains many links to cross-origin domains. DNS prefetching allows web browsers to perform DNS resolution in the background before the user clicks a link. This feature can cause privacy issues, because DNS resolution from the user’s computer is performed without their consent if they don’t intend to visit the linked website.

On a complex private webpage, a combination of unique links/DNS resolutions can indicate, to an eavesdropper for instance, that the user is visiting the private page.

Ask Yourself Whether

  • Links to cross-origin domains could result in leakage of confidential information about the user’s navigation/behavior of the website.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the X-DNS-Prefetch-Control header with an off value, keeping in mind that this could significantly degrade website performance.
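The corresponding response header is:

X-DNS-Prefetch-Control: off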

Sensitive Code Example

In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: true // Sensitive: allowing DNS prefetching is security-sensitive
  })
);

Compliant Solution

In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement the X-DNS-Prefetch-Control header:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: false // Compliant
  })
);

See

javascript:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input. In some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input; this means that a small, carefully crafted input (around 20 characters) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact, in this case with a large, carefully crafted input (thousands of characters).

This rule determines the runtime complexity of a regular expression and informs you if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.

  • If you have a repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition, if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split(/\s*,/) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ doesn’t cause performance issues: the inner group can only be matched if there is exactly one b character per repetition of the group.
  • Optimize regular expressions by emulating possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when the regex is not anchored to the beginning of the string, in which case it is quite hard to avoid quadratic runtime. In those cases consider the following approaches:

  • Solve the problem without regular expressions
  • Use an alternative non-backtracking regex implementation such as Google’s RE2 or node-re2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it or using multiple regular expressions. One example of this would be to replace str.split(/\s*,\s*/) with str.split(",") and then trimming the spaces from the strings as a second step (see the sketch after this list).
  • It is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to str.match(regex) could be replaced with matched = str.match(regex) and matched[1] !== undefined.
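For illustration, a minimal sketch of the multiple-pass approach mentioned above, splitting on a plain comma and trimming as a second step:

const input = " a ,  b,c ";

// str.split(/\s*,\s*/) can be quadratic on pathological inputs;
// splitting on the plain comma and trimming each part afterwards is linear.
const parts = input.split(",").map(part => part.trim());
console.log(parts); // [ 'a', 'b', 'c' ]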

Sensitive Code Example

Due to catastrophic backtracking, the following regex evaluation will effectively never end:

/(a+)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions, and thus can be used, if possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences:

/((?=(a+))\2)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Compliant

See

javascript:S2598

Why is this an issue?

If a file upload feature is implemented without proper folder restriction, it results in an implicit trust violation within the server, as trusted files end up implicitly stored alongside third-party files that should be considered untrusted.

This can allow an attacker to disrupt the security of an internal server process or the running application.

What is the potential impact?

After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, bash scripts, malware, or malicious configuration files targeting potential processes.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Full application compromise

In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

Server Resource Exhaustion

By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service.

If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it.

Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources.

In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Formidable

Code examples

Noncompliant code example

const Formidable = require('formidable');

const form          = new Formidable(); // Noncompliant
form.uploadDir      = "/tmp/";
form.keepExtensions = true;

Compliant solution

const Formidable = require('formidable');

const form          = new Formidable();
form.uploadDir      = "/uploads/";
form.keepExtensions = false;

How does this work?

Use pre-approved folders

Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:

  • It should have specific read and write permissions that belong to the right people or organizations.
  • It should have a size limit or its size should be monitored.
  • It should contain backup copies if it contains data that belongs to users.

This folder should not be located in /tmp, /var/tmp or in the Windows directory %TEMP%.
These folders are usually "world-writable", can be manipulated, and can be accidentally deleted by the system.

Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names.
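For illustration, a minimal sketch of renaming an upload to a controlled string; the extension whitelist and file names are assumptions:

const crypto = require('crypto');
const path = require('path');

// Only well-known, non-executable extensions are kept; anything else is neutralized.
const ALLOWED_EXTENSIONS = new Set(['.png', '.jpg', '.pdf']);

function controlledFileName(originalName) {
  const ext = path.extname(originalName).toLowerCase();
  const safeExt = ALLOWED_EXTENSIONS.has(ext) ? ext : '.bin';
  // Random, server-generated name: nothing user-controlled ends up in the final path.
  return crypto.randomUUID() + safeExt;
}

console.log(controlledFileName('exploit.php')); // e.g. "36b8f84d-df4e-4d49-b662-bcde71a8764f.bin"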

Resources

javascript:S5742

Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CAs) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into wrongly validating a spoofed identity/forged certificate.

CAs implement the Certificate Transparency framework to publicly log the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs and verify that their identity was not usurped.

Ask Yourself Whether

  • The website identity is valuable and well-known, therefore prone to theft.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs to verify that the website appears in them; if it does not, the browser will block the request and display a warning to the user.
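The resulting response header looks like this:

Expect-CT: max-age=86400, enforce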

Sensitive Code Example

In an Express.js application, the code is sensitive if the expect-ct middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
    helmet({
      expectCt: false // Sensitive
    })
);

Compliant Solution

In an Express.js application, the expect-ct middleware is the standard way to implement Expect-CT. Usually, the deployment of this policy starts in report-only mode (enforce: false) with a low maxAge value (the number of seconds the policy applies). If everything works well, it is then recommended to block future connections that violate the Expect-CT policy (enforce: true) and to use a greater maxAge value:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.expectCt({
  enforce: true,
  maxAge: 86400
})); // Compliant

See

javascript:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they didn’t intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state/resources of the web application can be modified through HTTP POST or HTTP DELETE requests, for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Of course, sensitive operations should never be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

Express.js CSURF middleware protection is missing on an unsafe HTTP method such as POST:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });
let parseForm = express.urlencoded({ extended: false }); // body parser for the form payload

let app = express();

// Sensitive: this operation doesn't look like protected by CSURF middleware (csrfProtection is not used)
app.post('/money_transfer', parseForm, function (req, res) {
  res.send('Money transferred');
});

Protection provided by Express.js CSURF middleware is globally disabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let app = express();

app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive as POST is an unsafe method

Compliant Solution

Express.js CSURF middleware protection is used on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });
let parseForm = express.urlencoded({ extended: false }); // body parser for the form payload

let app = express();

app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant
  res.send('Money transferred')
});

Protection provided by Express.js CSURF middleware is enabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let app = express();

app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant

See

javascript:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

errorhandler Express.js middleware should not be used in production:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();
app.use(errorhandler()); // Sensitive

Compliant Solution

errorhandler Express.js middleware used only in development mode:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();

if (process.env.NODE_ENV === 'development') {
  app.use(errorhandler());
}

See

javascript:S5604

Powerful features are browser features (geolocation, camera, microphone, etc.) that can be accessed with a JavaScript API and may require a permission granted by the user. These features can have a high impact on privacy and user security, and thus should only be used if they are really necessary to implement the critical parts of an application.

This rule highlights intrusive permissions when requested with the future standard (but currently experimental) web browser query API and specific APIs related to the permission. It is highly recommended to customize this rule with the permissions considered as intrusive in the context of the web application.

Ask Yourself Whether

  • Some powerful features used by the application are not really necessary.
  • Users are not clearly informed why and when powerful features are used by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In order to respect user privacy it is recommended to avoid using intrusive powerful features.

Sensitive Code Example

When using the geolocation API, Firefox, for example, retrieves personal information such as nearby wireless access points and the IP address and sends it to the default geolocation service provider, Google Location Services:

navigator.permissions.query({name:"geolocation"}).then(function(result) {
});  // Sensitive: geolocation is a powerful feature with high privacy concerns

navigator.geolocation.getCurrentPosition(function(position) {
  console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude);
}); // Sensitive: geolocation is a powerful feature with high privacy concerns

Compliant Solution

If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible:

<html>
<head>
    <title>
        Retailer website example
    </title>
</head>
<body>
    Type a city, street or zip code where you want to retrieve the closest retail locations of our products:
    <form method=post>
        <input type=text value="New York"> <!-- Compliant -->
    </form>
</body>
</html>

See

javascript:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add silent malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface, or otherwise affect the general availability of the application.
  • mine cryptocurrencies in the background.

Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks can identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, one that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It ensures that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.
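For illustration, an integrity string can also be computed locally with Node's built-in crypto module; a minimal sketch, where the file name is an assumption:

const crypto = require('crypto');
const fs = require('fs');

// Compute the base64-encoded SHA-384 digest of the exact file served by the CDN.
const body = fs.readFileSync('script.js');
const digest = crypto.createHash('sha384').update(body).digest('base64');
console.log(`sha384-${digest}`); // value for the script's integrity attribute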

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

let script = document.createElement("script");
script.src = "https://cdn.example.com/latest/script.js"; // Sensitive
script.crossOrigin = "anonymous";
document.head.appendChild(script);

Compliant Solution

let script = document.createElement("script");
script.src = "https://cdn.example.com/v5.3.6/script.js";
script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC";
script.crossOrigin = "anonymous";
document.head.appendChild(script);

See

javascript:S5728

Content security policy (CSP) (fetch directives) is a W3C standard which is used by a server to specify, via an HTTP header, the origins from which the browser is allowed to load resources. It can help to mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website doesn’t define a CSP header, the browser applies the same-origin policy by default.

Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com

In the above example, all resources are allowed from the website where this header is set, and script resources fetched from example.com are also authorized:

<img src="selfhostedimage.png"> <!-- will be loaded because the default-src 'self' directive is applied -->
<img src="http://www.example.com/image.png"> <!-- will NOT be loaded because the default-src 'self' directive is applied -->
<script src="http://www.example.com/library.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive is applied -->
<script src="selfhostedscript.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive is applied -->
<script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because the script-src 'self' http://www.example.com directive is applied -->

Ask Yourself Whether

  • The resources of the application are fetched from various untrusted locations.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement content security policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application; CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(
    helmet({
      contentSecurityPolicy: false, // Sensitive
    })
);

Compliant Solution

In an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(helmet.contentSecurityPolicy()); // Compliant
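The middleware also accepts explicit directives. A minimal sketch matching the example header at the beginning of this section, where the allowed script origin is an assumption:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'", "http://www.example.com"] // example origin
    }
  })
);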

See

javascript:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant

Compliant solution

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: Encrypt-then-Authenticate-then-Translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
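For illustration, a minimal sketch of authenticated encryption with AES-256-GCM using Node's crypto module; the key is generated inline here, but in practice it would come from a secure key store:

const crypto = require('crypto');

const key = crypto.randomBytes(32); // 256-bit key; in practice, load it from a secure store
const iv  = crypto.randomBytes(12); // 96-bit IV, unique for every encryption

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('secret message', 'utf8'), cipher.final()]);
const authTag = cipher.getAuthTag(); // authenticity/integrity tag, stored alongside the ciphertext

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(authTag); // decryption throws if the ciphertext or tag was tampered with
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');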

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
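In Node.js, OAEP can be requested explicitly when encrypting with an RSA public key. A minimal sketch; the key pair is generated inline so the example is self-contained:

const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

const encrypted = crypto.publicEncrypt(
  {
    key: publicKey,
    padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
    oaepHash: 'sha256'
  },
  Buffer.from('secret message')
);

const decrypted = crypto.privateDecrypt(
  {
    key: privateKey,
    padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
    oaepHash: 'sha256'
  },
  encrypted
);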

Resources

Articles & blog posts

Standards

javascript:S5547

This vulnerability makes it possible to recover the cleartext of the encrypted message without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

const crypto = require('crypto');

crypto.createCipheriv("DES", key, iv); // Noncompliant

Compliant solution

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

javascript:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in jsonwebtoken

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['none'] // Noncompliant
}, callbackcheck);

Compliant solution

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'HS256' });

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['HS256']
}, callbackcheck);

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.
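A minimal sketch of the RS256 variant with jsonwebtoken; the key pair is generated inline for self-containment, while real keys would be loaded from secure storage:

const crypto = require('crypto');
const jwt = require('jsonwebtoken');

// PEM-encoded RSA key pair, generated here only to keep the example self-contained.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding:  { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

const token = jwt.sign({ sub: 'user-42' }, privateKey, { algorithm: 'RS256' });
const claims = jwt.verify(token, publicKey, { algorithms: ['RS256'] });
console.log(claims.sub); // "user-42"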

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take when encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.
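For instance, a minimal sketch of loading the signing key from an environment variable; the variable name is an assumption:

const jwt = require('jsonwebtoken');

const key = process.env.JWT_SECRET; // hypothetical variable name; never hard-code the secret
if (!key) {
  throw new Error('JWT_SECRET is not configured');
}

const token = jwt.sign({ sub: 'user-42' }, key, { algorithm: 'HS256' });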

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

javascript:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive; predictable random values have led to real-world vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the Math.random() function relies on a weak pseudorandom number generator, this function should not be used for security-critical applications or for protecting sensitive data. In such context, a cryptographically strong pseudorandom number generator (CSPRNG) should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong pseudorandom number generator (CSPRNG) like crypto.getRandomValues().
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

const val = Math.random(); // Sensitive
// Check if val is used in a security context.

Compliant Solution

// === Client side ===
const crypto = window.crypto || window.msCrypto;
var array = new Uint32Array(1);
crypto.getRandomValues(array); // Compliant for security-sensitive use cases

// === Server side ===
const crypto = require('crypto');
const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases

See

javascript:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

Noncompliant code example

Node.js offers multiple ways to set weak TLS protocols. The following options apply to the built-in https and tls modules and are honored by various third-party libraries as well.

The first is secureProtocol:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
 secureProtocol: 'TLSv1_method' // Noncompliant
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The second is the combination of minVersion and maxVersion. Note that they cannot be specified along with the secureProtocol option:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.1',  // Noncompliant
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The third is secureOptions, which in this example instructs OpenSSL to turn off some protocols altogether. In general, this option might trigger side effects and should be used carefully, if used at all.

const https     = require('node:https');
const tls       = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
}; // Noncompliant

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Compliant solution

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

A similar configuration uses minVersion and maxVersion:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions:

const https = require('node:https');
const tls   = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
    | constants.SSL_OP_NO_TLSv1_1
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

javascript:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2), as sketched after this list.
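For illustration, a minimal PBKDF2 sketch with Node's crypto module; the passphrase and iteration count are assumptions:

const crypto = require('crypto');

const salt = crypto.randomBytes(16); // random salt, stored alongside the derived key
// Derive a 32-byte (256-bit) AES key from a passphrase; the iteration count is an assumption.
const key = crypto.pbkdf2Sync('correct horse battery staple', salt, 600000, 32, 'sha256');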

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it on a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

// generateKeyPairSync takes no callback; it returns the key pair directly.
const { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

const { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

const crypto = require('crypto');

const { privateKey, publicKey } = crypto.generateKeyPairSync('ec', {
    namedCurve: 'secp112r2', // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Compliant solution

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

const { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

const { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

const crypto = require('crypto');

const { privateKey, publicKey } = crypto.generateKeyPairSync('ec', {
    namedCurve: 'secp224k1',
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit-long private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

javascript:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive; weak or misused encryption has led to real-world vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only ways to defeat or weaken encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that the strength of encryption algorithms decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) mode is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise, it exposes the encrypted value to cryptanalysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure (a sketch follows this list):
    • Electronic Codebook (ECB) mode: under a given key, any given plaintext block is always encrypted to the same ciphertext block. Thus it does not hide data patterns well and does not provide serious message confidentiality; it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.
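
A minimal sketch of the recommended AES-GCM mode with Node.js' crypto module, combining a randomly generated key and a fresh IV per encryption:

const crypto = require('crypto');

const key = crypto.randomBytes(32); // 256-bit key from a secure random source
const iv = crypto.randomBytes(12);  // fresh 96-bit IV for every encryption

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('some plaintext', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag(); // authentication tag, stored alongside the ciphertext

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');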

Sensitive Code Example

// === Client side ===
crypto.subtle.encrypt(algo, key, plainData); // Sensitive
crypto.subtle.decrypt(algo, key, encData); // Sensitive
// === Server side ===
const crypto = require("crypto");
const cipher = crypto.createCipher(algo, key); // Sensitive
const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive
const decipher = crypto.createDecipher(algo, key); // Sensitive
const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive
const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive
const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive
const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive
const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive

See

javascript:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Passport

Code examples

Upon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve this by using the req.session.regenerate() method. By calling this method after successful authentication, you can ensure that each user is assigned a new and unique session ID.

Noncompliant code example

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    // Noncompliant - no session.regenerate after login
    res.redirect('/');
  });

Compliant solution

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    let prevSession = req.session;
    req.session.regenerate((err) => {
      // A new session ID is issued; the data from the previous session is restored.
      Object.assign(req.session, prevSession);
      res.redirect('/');
    });
  });

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

Resources

Documentation

Articles & blog posts

Standards

javascript:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default the HttpOnly flag should be set to true for most cookies, and it’s mandatory for session / sensitive-security cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  httpOnly: false, // Sensitive
}); // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  httpOnly: true, // Compliant
}); // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant

See

javascript:S4784

This rule is deprecated; use S5852 instead.

Using regular expressions is security-sensitive. It has led in the past to vulnerabilities.

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{.

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If possible, use a library which is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Sensitive Code Example

const regex = /(a+)+b/; // Sensitive
const regex2 = new RegExp("(a+)+b"); // Sensitive

str.search("(a+)+b"); // Sensitive
str.match("(a+)+b"); // Sensitive
str.split("(a+)+b"); // Sensitive

Note: String.matchAll does not raise any issue as it is not supported by NodeJS.

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".
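
For the (a+)+b example above, the fix is simply to remove the nested quantifier; /a+b/ matches the same strings as /(a+)+b/ but is evaluated in linear time:

const safeRegex = /a+b/; // equivalent to /(a+)+b/, without catastrophic backtracking
safeRegex.test('aaaaaaaaaaaaaaaaaaaaaaaaaaaaab'); // returns true almost instantly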

See

javascript:S5757

Log management is an important topic, especially for the security of a web application: it ensures that user activity, including that of potential attackers, is recorded and available for an analyst to understand what happened on the web application in case of malicious activity.

Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS, and others. However, to protect users' privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text.

Ask Yourself Whether

In a production environment:

  • The web application uses confidential information and logs a significant amount of data.
  • Logs are externalized to SIEM or Big Data repositories.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs.

Sensitive Code Example

With Signale log management framework the code is sensitive when an empty list of secrets is defined:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// assume the credit card numbers are retrieved somewhere, e.g.
// CREDIT_CARD_NUMBERS = ["1234-5678-0000-9999", "1234-5678-0000-8888"]

const options = {
  secrets: []         // empty list of secrets
};

const logger = new Signale(options); // Sensitive

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

Compliant Solution

With Signale log management framework it is possible to define a list of secrets that will be hidden in logs:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// assume the credit card numbers are retrieved somewhere, e.g.
// CREDIT_CARD_NUMBERS = ["1234-5678-0000-9999", "1234-5678-0000-8888"]

const options = {
  secrets: ["([0-9]{4}-?)+"]
};

const logger = new Signale(options); // Compliant

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

See

javascript:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led in the past to vulnerabilities.

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non human-readable format before writing it to a cookie. The encoding can be reverted and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, every piece of information read from a cookie should be sanitized.

Sensitive Code Example

// === Built-in NodeJS modules ===
const http = require('http');
const https = require('https');

http.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
https.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
// === ExpressJS ===
const express = require('express');
const app = express();
app.use(function(req, res, next) {
  res.cookie('name', 'John'); // Sensitive
});
// === In browser ===
// Set cookie
document.cookie = "name=John"; // Sensitive

See

javascript:S5759

Users often connect to web servers through HTTP proxies.

Proxies can be configured to forward the client IP address via the X-Forwarded-For or Forwarded HTTP headers.

An IP address is personal information which can identify a single user and thus impact their privacy.

Ask Yourself Whether

  • The web application uses reverse proxies or similar but doesn’t need to know the IP address of the user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

The user's IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management.

Sensitive Code Example

node-http-proxy

var httpProxy = require('http-proxy');

httpProxy.createProxyServer({target:'http://localhost:9000', xfwd:true}) // Noncompliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant
app.listen(3000);

Compliant Solution

node-http-proxy

var httpProxy = require('http-proxy');

// By default xfwd option is false
httpProxy.createProxyServer({target:'http://localhost:9000'}) // Compliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// By default xfwd option is false
app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true})); // Compliant
app.listen(3000);

See

javascript:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2, or PBKDF2, because slowness hinders brute-force attacks.

Sensitive Code Example

const crypto = require("crypto");

const hash = crypto.createHash('sha1'); // Sensitive

Compliant Solution

const crypto = require("crypto");

const hash = crypto.createHash('sha512'); // Compliant
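
For password storage specifically, a sketch using Node.js' built-in scrypt (the parameters shown are illustrative assumptions):

const crypto = require('crypto');

const password = 'user-supplied password'; // hypothetical input
const salt = crypto.randomBytes(16);       // unique salt per password
const derivedKey = crypto.scryptSync(password, salt, 64); // slow, memory-hard KDF

// Store both salt and derivedKey; to verify a login attempt, re-derive the key
// and compare with crypto.timingSafeEqual() to avoid timing side channels.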

See

javascript:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by overriding checkServerIdentity with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.
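
Alternatively, in Node.js the self-signed certificate can be trusted explicitly for a given connection through the ca option, which keeps hostname validation enabled (a sketch; dev-cert.pem is a hypothetical file, and ca replaces the default trust store for this connection):

const fs = require('node:fs');
const tls = require('node:tls');

let socket = tls.connect(443, 'localhost', {
  ca: [fs.readFileSync('dev-cert.pem')] // trust only this self-signed certificate
}, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});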

Resources

Standards

javascript:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as an XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in libxmljs

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml, {
    noblanks: true,
    noent: true, // Noncompliant
    nocdata: true
});

Compliant solution

parseXmlString is safe by default.

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

javascript:S4817

This rule is deprecated, and will eventually be removed.

Executing XPATH expressions is security-sensitive. It has led in the past to vulnerabilities.

User-provided data such as URL parameters should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the initial meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document.

Ask Yourself Whether

  • the XPATH expression might contain some unsafe input coming from a user.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize any user input before using it in an XPATH expression.

Sensitive Code Example

// === Server side ===

var xpath = require('xpath');
var xmldom = require('xmldom');

var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select(userinput, doc); // Sensitive
var node = xpath.select1(userinput, doc); // Sensitive
// === Client side ===

// Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

// Internet Explorer uses its own methods to select nodes:
var nodes = xmlDoc.selectNodes(userinput); // Sensitive
var node = xmlDoc.SelectSingleNode(userinput); // Sensitive

See

javascript:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive. It has led in the past to vulnerabilities.

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not their use through frameworks or high-level APIs such as HTTP connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases there is no need to open a socket yourself. Instead, use libraries and existing protocols.
  • Encrypt all data sent if it is sensitive. Usually it is better to encrypt it even if the data is not sensitive, as it might change later (a sketch follows this list).
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.
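
As a sketch of the encryption point above, a TLS socket from Node.js' built-in tls module can replace a raw TCP socket when sensitive data has to be sent:

const tls = require('node:tls');

let socket = tls.connect(443, 'example.com', {}, () => {
  socket.write('sensitive payload'); // encrypted in transit
  socket.end();
});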

Sensitive Code Example

const net = require('net');

var socket = new net.Socket(); // Sensitive
socket.connect(80, 'google.com');

// net.createConnection creates a new net.Socket, initiates connection with socket.connect(), then returns the net.Socket that starts the connection
net.createConnection({ port: port }, () => {}); // Sensitive

// net.connect is an alias to net.createConnection
net.connect({ port: port }, () => {}); // Sensitive

See

javascript:S1523

Executing code dynamically is security-sensitive. It has led in the past to vulnerabilities.

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon as they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: an XSS attack) and have a huge impact on an application’s security.

This rule raises issues on calls to eval and the Function constructor. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

The rule also flags string literals starting with javascript:, as the code passed in javascript: URLs is evaluated the same way as calls to eval or the Function constructor.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.

Sensitive Code Example

let value = eval('obj.' + propName); // Sensitive
let func = Function('obj' + propName); // Sensitive
location.href = 'javascript:void(0)'; // Sensitive
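
As an illustration of the last recommendation, dynamic property access does not require eval at all; a minimal sketch, where allowedProps is a hypothetical allowlist:

const allowedProps = new Set(['name', 'email']);

function readProp(obj, propName) {
  if (!allowedProps.has(propName)) {
    throw new Error('Unexpected property: ' + propName);
  }
  return obj[propName]; // plain property access, no dynamic code execution
}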

Exceptions

This rule will not raise an issue when the argument of eval or Function is a literal string, as this is reasonably safe.

See

javascript:S1525

This rule is deprecated; use S4507 instead.

Why is this an issue?

The debugger statement can be placed anywhere in procedures to suspend execution. Using the debugger statement is similar to setting a breakpoint in the code. Such statements must absolutely be removed from the source code to prevent unexpected behavior or added vulnerability to attacks in production.

Noncompliant code example

for (i = 1; i<5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
  // Wait for user to resume.
  debugger;
}

Compliant solution

for (i = 1; i<5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
}

Resources

javascript:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run in a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o777); // Sensitive
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o000); // Sensitive

Compliant Solution

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o770); // Compliant
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o007); // Compliant

See

javascript:S4721

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process; shell meta-characters can then be used (when parameters are user-controlled, for instance) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

const cp = require('child_process');

// A shell will be spawned in the following cases:
cp.exec(cmd); // Sensitive
cp.execSync(cmd); // Sensitive

cp.spawn(cmd, { shell: true }); // Sensitive
cp.spawnSync(cmd, { shell: true }); // Sensitive
cp.execFile(cmd, { shell: true }); // Sensitive
cp.execFileSync(cmd, { shell: true }); // Sensitive

Compliant Solution

const cp = require('child_process');

cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant
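
Passing user-controlled values as an argument array, rather than interpolating them into a command string, also keeps them out of shell interpretation; a sketch, where userInput is assumed to be attacker-controllable:

const cp = require('child_process');

const userInput = process.argv[2];
// Arguments are passed verbatim to the program and never parsed by a shell.
cp.execFileSync('/usr/bin/file', [userInput], { shell: false });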

See

javascript:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to vulnerabilities.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It encourages using the same address in every environment (dev, sys, qa, prod), which is misleading.

Last but not least it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without having to rebuild the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Compliant Solution

ip = process.env.IP_ADDRESS; // Compliant

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID).
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the 2001:db8::/32 range, reserved for documentation purposes by RFC 3849

See

javascript:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led in the past to vulnerabilities.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus, passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line: it is common to write it to the process' standard input, or to give the path to a file containing the information.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
var param = process.argv[2]; // Sensitive: check how the argument is used
console.log('Param: ' + param);
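
A minimal validation sketch for the example above (the allowed pattern is an assumption; adapt it to the input the application actually expects):

var param = process.argv[2] || '';
if (!/^[a-zA-Z0-9_-]{1,64}$/.test(param)) { // allowlist validation, not a blacklist
  console.error('Invalid argument');
  process.exit(1);
}
console.log('Param: ' + param);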

See

javascript:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led in the past to vulnerabilities.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
// All uses of process.stdin are security-sensitive and should be reviewed

process.stdin.on('readable', () => {
	const chunk = process.stdin.read(); // Sensitive
	if (chunk !== null) {
		dosomething(chunk);
	}
});

const readline = require('readline');
readline.createInterface({
	input: process.stdin // Sensitive
}).on('line', (input) => {
	dosomething(input);
});

See

javascript:S1442

This rule is deprecated; use S4507 instead.

Why is this an issue?

alert(...) as well as confirm(...) and prompt(...) can be useful for debugging during development, but in production mode this kind of pop-up could expose sensitive information to attackers, and should never be displayed.

Noncompliant code example

if(unexpectedCondition) {
  alert("Unexpected Condition");
}

Resources

javascript:S4830

This vulnerability makes it possible for encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Node.js

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by setting rejectUnauthorized to false. To enable validation, set the value to true, or do not set rejectUnauthorized at all, so that the secure default value is used.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  rejectUnauthorized: false,
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
}); // Noncompliant

const tls = require('node:tls');

let options = {
    rejectUnauthorized: false,
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
}); // Noncompliant

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
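
For example, in Node.js a private CA can be supplied per request through the ca option while validation stays enabled (a sketch; internal-ca.pem is a hypothetical file, and note that ca replaces the default trust store for this request):

const fs = require('node:fs');
const https = require('node:https');

let options = {
  hostname: 'internal.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  ca: [fs.readFileSync('internal-ca.pem')] // certificate validation remains active
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});
req.end();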

Resources

Standards

javascript:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to anyone, either authenticated or anonymous (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, etc.).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to only grant users the necessary permissions for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default), and if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control:

const s3 = require('aws-cdk-lib/aws-s3');
const s3deploy = require('aws-cdk-lib/aws-s3-deployment');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

Compliant Solution

With the PRIVATE access control (default), only the bucket owner has the read/write permissions on the bucket and its ACL.

const s3 = require('aws-cdk-lib/aws-s3');
const s3deploy = require('aws-cdk-lib/aws-s3-deployment');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

See

javascript:S6268

Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM.

Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

  • The value for which sanitization has been disabled is user-controlled.
  • It’s difficult to understand how this value is constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid including dynamic executable code and thus disabling Angular’s built-in sanitization unless it’s absolutely necessary. Try instead to rely as much as possible on static templates and Angular built-in sanitization to define web page content.
  • Make sure to understand how the value to consider as trusted is constructed and never concatenate it with user-controlled data.
  • Make sure to choose the correct DomSanitizer "bypass" method based on the context. For instance, only use bypassSecurityTrustUrl to trust URLs in an href attribute context.

Sensitive Code Example

import { Component, OnInit } from '@angular/core';
import { DomSanitizer, SafeHtml } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello" [innerHTML]="hello"></div>'
})
export class HelloComponent implements OnInit {
  hello: SafeHtml;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    let name = this.route.snapshot.queryParams.name;
    let html = "<h1>Hello " + name + "</h1>";
    this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive
  }
}

Compliant Solution

import { Component, OnInit } from '@angular/core';
import { DomSanitizer } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello"><h1>Hello {{name}}</h1></div>',
})
export class HelloComponent implements OnInit {
  name: string;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    this.name = this.route.snapshot.queryParams.name;
  }
}

See

javascript:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that turns into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed size of archive entries.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it’s not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For tar module:

const tar = require('tar');

tar.x({ // Sensitive
  file: 'foo.tar.gz'
});

For adm-zip module:

const AdmZip = require('adm-zip');

let zip = new AdmZip("./foo.zip");
zip.extractAllTo("."); // Sensitive

For jszip module:

const fs = require("fs");
const JSZip = require("jszip");

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) { // Sensitive
    zip.forEach(function (relativePath, zipEntry) {
      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(zipEntry.name);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          fs.writeFileSync(zipEntry.name, content);
        });
      }
    });
  });
});

For yauzl module

const yauzl = require('yauzl');

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  zipfile.on("entry", function(entry) {
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

For extract-zip module:

const extract = require('extract-zip')

async function main() {
  let target = __dirname + '/test';
  await extract('test.zip', { dir: target }); // Sensitive
}
main();

Compliant Solution

For tar module:

const tar = require('tar');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;

tar.x({
  file: 'foo.tar.gz',
  filter: (path, entry) => {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    totalSize += entry.size;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    return true;
  }
});

For adm-zip module:

const AdmZip = require('adm-zip');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

let fileCount = 0;
let totalSize = 0;
let zip = new AdmZip("./foo.zip");
let zipEntries = zip.getEntries();
zipEntries.forEach(function(zipEntry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
    }

    let entrySize = zipEntry.getData().length;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
    }

    let compressionRatio = entrySize / zipEntry.header.compressedSize;
    if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
    }

    if (!zipEntry.isDirectory) {
        zip.extractEntryTo(zipEntry.entryName, ".");
    }
});

For jszip module:

const fs = require("fs");
const pathmodule = require("path");
const JSZip = require("jszip");

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;
let targetDirectory = __dirname + '/archive_tmp';

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) {
    zip.forEach(function (relativePath, zipEntry) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // Prevent ZipSlip path traversal (S6096)
      const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name);
      if (!resolvedPath.startsWith(targetDirectory)) {
        throw 'Path traversal detected';
      }

      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(resolvedPath);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          totalSize += content.length;
          if (totalSize > MAX_SIZE) {
            throw 'Reached max. size';
          }

          fs.writeFileSync(resolvedPath, content);
        });
      }
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For yauzl module:

const yauzl = require('yauzl');

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  let fileCount = 0;
  let totalSize = 0;

  zipfile.on("entry", function(entry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
    // Alternatively, calculate the size from the readStream.
    let entrySize = entry.uncompressedSize;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    if (entry.compressedSize > 0) {
      let compressionRatio = entrySize / entry.compressedSize;
      if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
      }
    }

    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

Be aware that, due to the similar structure of sensitive and compliant code, the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.
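
For reference, a minimal sketch of the extraction step left as a TODO above, assuming entries should land under a fixed target directory (the path check mirrors the jszip example; extractEntry is an illustrative helper, not part of the rule):

const fs = require('fs');
const pathmodule = require('path');

function extractEntry(zipfile, entry, targetDirectory) {
  // Prevent ZipSlip path traversal (S6096) before writing anything to disk.
  const resolvedPath = pathmodule.join(targetDirectory, entry.fileName);
  if (!resolvedPath.startsWith(targetDirectory)) {
    throw 'Path traversal detected';
  }

  zipfile.openReadStream(entry, function (err, readStream) {
    if (err) throw err;
    readStream.pipe(fs.createWriteStream(resolvedPath));
  });
}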

For extract-zip module:

const extract = require('extract-zip')

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

async function main() {
  let fileCount = 0;
  let totalSize = 0;

  let target = __dirname + '/foo';
  await extract('foo.zip', {
    dir: target,
    onEntry: function(entry, zipfile) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
      // Alternatively, calculate the size from the readStream.
      let entrySize = entry.uncompressedSize;
      totalSize += entrySize;
      if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
      }

      if (entry.compressedSize > 0) {
        let compressionRatio = entrySize / entry.compressedSize;
        if (compressionRatio > THRESHOLD_RATIO) {
          throw 'Reached max. compression ratio';
        }
      }
    }
  });
}
main();

See

javascript:S6245

This rule is deprecated, and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, like HIPAA or PCI DSS, and other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'default'
}); // Sensitive

Bucket encryption is disabled by default.

Compliant Solution

Server-side encryption with AWS KMS-managed keys is used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS_MANAGED
});

// Alternatively with a KMS key managed by the user; BucketEncryption.KMS is
// required when an explicit encryptionKey is provided.

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS,
    encryptionKey: access_key
});

See

javascript:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enforce HTTPS-only access by setting the enforceSSL property to true.

Sensitive Code Example

Access to S3 bucket objects through TLS is not enforced by default:

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example'); // Sensitive

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example', {
    bucketName: 'example',
    versioned: true,
    publicReadAccess: false,
    enforceSSL: true
});

See

javascript:S6252

S3 buckets can be versioned. When an S3 bucket is unversioned, a new version of an object overwrites the existing one in the S3 bucket.

This can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning so that different versions of an object can be retrieved and restored.

Sensitive Code Example

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: false // Sensitive
});

The default value of versioned is false, so the absence of this parameter is also sensitive.

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: true
});

See

javascript:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need the access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AnyPrincipal()] // Sensitive
}))

Compliant Solution

This policy allows only the authorized users:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AccountRootPrincipal()]
}))

See

javascript:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration; a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-explicit', {
  availabilityZone: 'us-west-2a',
  size: Size.gibibytes(1),
  encrypted: false // Sensitive
});

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-implicit', {
  availabilityZone: 'eu-west-1a',
  size: Size.gibibytes(1),
}); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'encrypted-explicit', {
  availabilityZone: 'eu-west-1a',
  size: Size.gibibytes(1),
  encrypted: true
});

See

javascript:S2817

This rule is deprecated, and will eventually be removed.

Why is this an issue?

The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.)

Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database.

Noncompliant code example

var db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024);  // Noncompliant
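
The rule text does not name a replacement, but IndexedDB is the browser storage API that effectively superseded Web SQL; a minimal sketch (database and store names are hypothetical):

const request = window.indexedDB.open("myDb", 1);
request.onupgradeneeded = function (event) {
  // Create the object store on first open or version upgrade.
  event.target.result.createObjectStore("secrets", { keyPath: "id" });
};
request.onsuccess = function (event) {
  const db = event.target.result;
  db.transaction("secrets", "readwrite").objectStore("secrets").put({ id: 1, value: "example" });
};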

Resources

javascript:S2819

Cross-origin communication allows different websites to interact with each other. This interaction is typically achieved through mechanisms like AJAX requests, WebSockets, or postMessage API. However, a vulnerability can arise when these communications are not properly secured by verifying their origins.

Why is this an issue?

Without origin verification, the target website cannot distinguish between legitimate requests from its own pages and malicious requests from an attacker’s site. The attacker can craft a malicious website or script that sends requests to a target website where the user is already authenticated.

This vulnerability class is not about a single specific user input or action, but rather a series of actions that lead to an insecure cross-origin communication.

What is the potential impact?

The absence of origin verification during cross-origin communications can lead to serious security issues.

Data Breach

If an attacker can successfully exploit this vulnerability, they may gain unauthorized access to sensitive data. For instance, a user’s personal information, financial details, or other confidential data could be exposed. This not only compromises the user’s privacy but can also lead to identity theft or financial loss.

Unauthorized Actions

An attacker could manipulate the communication between websites to perform actions on behalf of the user without their knowledge. This could range from making unauthorized purchases to changing user settings or even deleting accounts.

How to fix it

When sending a message, avoid using * for the target origin (it means no preference). Instead, define it explicitly so the message will only be dispatched to this URI. When receiving the message, verify the origin to be sure that it was sent by an authorized sender.

Code examples

Noncompliant code example

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("hello", "*"); // Noncompliant: * is used

When receiving a message:

window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property.
  console.log(event.data);
 });

Compliant solution

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("hello", "https://secure.example.com");

When receiving a message:

window.addEventListener("message", function(event) {
  if (event.origin !== "http://example.org")
    return;

  console.log(event.data)
});

Resources

Documentation

Standards

javascript:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:

  • blockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • ignorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • blockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • restrictPublicBuckets: whether to restrict access to the S3 endpoints of public policies to principals within the bucket owner account.

The separate attribute BlockPublicAccess.BLOCK_ACLS only turns on blockPublicAcls and ignorePublicAcls; public policies can still affect the S3 bucket.

However, all of those options can be enabled by setting the blockPublicAccess property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • blockPublicAcls to True to block new attempts to set public ACLs.
  • ignorePublicAcls to True to block existing public ACLs.
  • blockPublicPolicy to True to block new attempts to set public policies.
  • restrictPublicBuckets to True to restrict existing public policies.

Sensitive Code Example

By default, when not set, blockPublicAccess is fully deactivated (nothing is blocked):

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket'
}); // Sensitive

This blockPublicAccess configuration allows public ACLs to be set:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : false, // Sensitive
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive
});

Compliant Solution

This blockPublicAccess blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL
});

A similar configuration to the one above can be obtained by setting all parameters of blockPublicAccess:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : true,
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

See

javascript:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets (see the sketch after this list).
  • If a password has been disclosed through the source code: change it.
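
A minimal sketch of the secrets-manager approach, assuming the AWS SDK v3 @aws-sdk/client-secrets-manager package and a hypothetical secret named "mysql/credentials":

const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');

async function getDbCredentials() {
  const client = new SecretsManagerClient({});
  // The credentials are fetched at runtime and never appear in the source code.
  const response = await client.send(new GetSecretValueCommand({ SecretId: 'mysql/credentials' }));
  return JSON.parse(response.SecretString);
}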

Sensitive Code Example

var mysql = require('mysql');

var connection = mysql.createConnection(
{
  host:'localhost',
  user: "admin",
  database: "project",
  password: "mypassword", // sensitive
  multipleStatements: true
});

connection.connect();

Compliant Solution

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: process.env.MYSQL_URL,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect();

See

javascript:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com"; // Sensitive
url = "ftp://anonymous@example.com"; // Sensitive
url = "telnet://anonymous@example.com"; // Sensitive

For nodemailer:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: false, // Sensitive
  requireTLS: false // Sensitive
});

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({}); // Sensitive

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': false // Sensitive
});

For telnet-client:

const Telnet = require('telnet-client'); // Sensitive

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-http-default', {
  port: 8080,
  open: true
}); // Sensitive

alb.addListener('listener-http-explicit', {
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-http-explicit-constructor', {
  loadBalancer: alb,
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

var listenerNLB = nlb.addListener('listener-tcp-default', {
  port: 1234
}); // Sensitive

listenerNLB = nlb.addListener('listener-tcp-explicit', {
  protocol: Protocol.TCP, // Sensitive
  port: 1234
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tcp-explicit-constructor', {
  loadBalancer: nlb,
  protocol: Protocol.TCP, // Sensitive
  port: 8080
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-http', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "HTTP", // Sensitive
  port: 80
});

new CfnListener(this, 'listener-tcp', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "TCP", // Sensitive
  port: 80
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-tcp', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'tcp' // Sensitive
  }]
});

new CfnLoadBalancer(this, 'elb-http', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'http' // Sensitive
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort: 10000,
      externalProtocol: LoadBalancingProtocol.TCP, // Sensitive
      internalPort: 10000
    }]
});

loadBalancer.addListener({
  externalPort: 10001,
  externalProtocol: LoadBalancingProtocol.TCP, // Sensitive
  internalPort: 10001
});
loadBalancer.addListener({
  externalPort: 10002,
  externalProtocol: LoadBalancingProtocol.HTTP, // Sensitive
  internalPort: 10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'unencrypted-implicit', {
  replicationGroupDescription: 'exampleDescription'
}); // Sensitive

new CfnReplicationGroup(this, 'unencrypted-explicit', {
  replicationGroupDescription: 'exampleDescription',
  transitEncryptionEnabled: false // Sensitive
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-implicit-unencrypted', undefined); // Sensitive

new CfnStream(this, 'cfnstream-explicit-unencrypted', {
  streamEncryption: undefined // Sensitive
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream, StreamEncryption } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-explicit-unencrypted', {
  encryption: StreamEncryption.UNENCRYPTED // Sensitive
});

Compliant Solution

url = "https://example.com";
url = "sftp://anonymous@example.com";
url = "ssh://anonymous@example.com";

For nodemailer, at least one of the following options must be set:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: true,
  requireTLS: true,
  port: 465,
  secured: true
});

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-https-explicit', {
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

alb.addListener('listener-https-implicit', {
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-https-explicit', {
  loadBalancer: loadBalancer,
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

nlb.addListener('listener-tls-explicit', {
  protocol: Protocol.TLS,
  port: 1234,
  certificates: [certificate]
});

nlb.addListener('listener-tls-implicit', {
  port: 1234,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tls-explicit', {
  loadBalancer: loadBalancer,
  protocol: Protocol.TLS,
  port: 8080,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-https', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "HTTPS",
  port: 80,
  certificates: [certificate]
});

new CfnListener(this, 'listener-tls', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "TLS",
  port: 80,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-ssl', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'ssl',
    sslCertificateId: sslCertificateId
  }]
});

new CfnLoadBalancer(this, 'elb-https', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'https',
    sslCertificateId: sslCertificateId
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const lb = new LoadBalancer(this, 'elb-ssl', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort:10000,
      externalProtocol:LoadBalancingProtocol.SSL,
      internalPort:10000
    }]
});

lb.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.SSL,
  internalPort:10001
});
lb.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTPS,
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'encrypted-explicit', {
  replicationGroupDescription: 'example',
  transitEncryptionEnabled: true
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream, StreamEncryption } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-implicit-encrypted');

new Stream(this, 'stream-explicit-encrypted-selfmanaged', {
  encryption: StreamEncryption.KMS,
  encryptionKey: encryptionKey,
});

new Stream(this, 'stream-explicit-encrypted-managed', {
  encryption: StreamEncryption.MANAGED
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-explicit-encrypted', {
  streamEncryption: {
    encryptionType: encryptionType,
    keyId: encryptionKey.keyId,
  }
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

javascript:S6299

Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML content, using native browser APIs like innerText instead of innerHTML.

It’s still possible to explicitly use innerHTML and similar APIs to render HTML. Accidentally rendering malicious HTML data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

The application needs to render HTML content which:

  • could be user-controlled and was not previously sanitized.
  • was constructed in a way that is difficult to trace.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid injecting HTML content with the v-html directive unless the content can be considered 100% safe; instead, rely as much as possible on built-in auto-escaping Vue.js features. If rendering user-supplied HTML is unavoidable, sanitize it first (see the sketch after this list).
  • Take care when using the v-bind:href directive to set URLs which can contain malicious JavaScript (javascript:onClick(...)).
  • Event directives like :onmouseover are also prone to JavaScript injection and should not be used with unsafe values.
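
A minimal sanitization sketch, assuming the DOMPurify library is acceptable in the project (DOMPurify and the safeHtmlContent computed property are illustrative choices, not part of the rule):

const DOMPurify = require('dompurify');

Vue.component('element', {
  computed: {
    // Strip scripts and inline event handlers before the markup reaches v-html.
    safeHtmlContent: function () {
      return DOMPurify.sanitize(this.htmlContent);
    }
  },
  template: '<div v-html="safeHtmlContent"></div>'
});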

Sensitive Code Example

When using Vue.js templates, the v-html directive enables HTML rendering without any sanitization:

<div v-html="htmlContent"></div> <!-- Noncompliant -->

When using a rendering function, the innerHTML attribute enables HTML rendering without any sanitization:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerHTML: this.htmlContent, // Noncompliant
        }
      }
    );
  },
});

When using JSX, the domPropsInnerHTML attribute enables HTML rendering without any sanitization:

<div domPropsInnerHTML={this.htmlContent}></div> <!-- Noncompliant -->

Compliant Solution

When using Vue.js templates, putting the content as a child node of the element is safe:

<div>{{ htmlContent }}</div>

When using a rendering function, using the innerText attribute or putting the content as a child node of the element is safe:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerText: this.htmlContent,
        }
      },
      this.htmlContent // Child node
    );
  },
});

When using JSX, putting the content as a child node of the element is safe:

<div>{this.htmlContent}</div>

See

javascript:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;
declare const vpc: ec2.Vpc;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: false, // Sensitive
});

Compliant Solution

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;
declare const vpc: ec2.Vpc;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: true,
});

See

javascript:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information will occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes secure access management less error-prone.

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["*"] // Sensitive
        })
    ]
})

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["arn:aws:iam:::policy/team1/*"]
        })
    ]
})

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

javascript:S5691

Hidden files are created automatically by many tools to save user preferences; well-known examples are .profile, .bashrc, .bash_history or .git. To reduce clutter, operating system commands like ls do not display these files by default.

Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets.

Ask Yourself Whether

  • Hidden files may have been inadvertently uploaded to the static server’s public directory, and the server accepts requests for hidden files.
  • There is no business use case for serving files in .name format, but the server is not configured to reject requests for this type of file.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable the serving of hidden files.

Sensitive Code Example

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'});   // Sensitive
app.use(serveStaticMiddleware);

Compliant Solution

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'});   // Compliant: ignore or deny are recommended values
let serveStaticDefault = serveStatic('public', { 'index': false});   // Compliant: by default, "dotfiles" (file or directory that begins with a dot) are not served (with the exception that files within a directory that begins with a dot are not ignored), see serve-static module documentation
app.use(serveStaticMiddleware);

See

javascript:S5693

Rejecting requests with a significant content length is a good practice for controlling network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • Size limits are not defined for the different resources of the web application.
  • The web application is not protected by rate-limiting features.
  • The web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • lower than or equal to 8 MB for file uploads.
    • lower than or equal to 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB

const formDefault = new Formidable(); // Sensitive, the default value is 200MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
    fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB
  }
});

let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit
  storage: diskStorage,
});

body-parser module:

// 4MB is more than the recommended limit of 2MB for non-file-upload requests
let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive

Compliant Solution

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 8000000; // Compliant: 8MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
     fileSize: 8000000 // Compliant: 8MB
  }
});

body-parser module:

let jsonParser = bodyParser.json(); // Compliant, when the limit is not defined, the default value is set to 100kb
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant

See

javascript:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["*"], // Sensitive
    resources: ["arn:aws:iam:::user/*"],
})

Compliant Solution

A customer-managed policy that grants only the required permissions:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["iam:GetAccountSummary"],
    resources: ["arn:aws:iam:::user/*"],
})

See

javascript:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: EngineVersion.OPENSEARCH_1_3,
}); // Sensitive, encryption must be explicitly enabled

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
}); // Sensitive, encryption must be explicitly enabled

Compliant Solution

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: EngineVersion.OPENSEARCH_1_3,
  encryptionAtRest: {
    enabled: true,
  },
});

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
  encryptionAtRestOptions: {
    enabled: true,
  },
});

See

javascript:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use parameterized queries, prepared statements, or stored procedures, and bind untrusted values to query parameters, as in the compliant solution below.

Sensitive Code Example

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive

Compliant Solution

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {});
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {});

Exceptions

This rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call.

const sql = 'SELECT * FROM users WHERE id = ' + userinput;
mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised.

See

javascript:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (for example, by using wildcard "*" permissions). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit the permissions that are granted and the resources to which they are granted. Doing so ensures that there are no users or roles with more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["*"], // Noncompliant
    })],
});

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"],
    })],
});

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.

Resources

Documentation

Articles & blog posts

Standards

javascript:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. With encryption enabled, adversaries who gain physical access to the storage media cannot read the data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';

new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn'
}); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});
new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn',
    kmsKeyId: encryptionKey.keyId
});

See

javascript:S5689

Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment.

Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version.

Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities.

Ask Yourself Whether

  • Version information is accessible to end users.
  • Internal systems do not benefit from timely patch management workflows.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

In general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle.

The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header.
This can be achieved directly through the web application code, the server (Nginx, Apache), or firewalls.

Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that this does not provide as much protection as regular updates and patches.
Security by obscurity is the weakest form of protection and should never be the only defense mechanism; it should always be combined with other security measures.

Sensitive Code Example

In Express.js, version information is disclosed by default in the x-powered-by HTTP header:

let express = require('express');

let example = express(); // Sensitive

example.get('/', function (req, res) {
  res.send('example')
});

Compliant Solution

The x-powered-by HTTP header should be disabled in Express.js with app.disable:

let express = require('express');

let example = express();
example.disable("x-powered-by");

Or with helmet’s hidePoweredBy middleware:

const express = require('express');
let helmet = require("helmet");

let example = express();
example.use(helmet.hidePoweredBy());

See

javascript:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website. When opened, the linked page changes the original page to "http://example.com/fake_login", which hosts a fake login page that can trick real users into entering their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: In Chrome 88+, Firefox 79+ or Safari 12.1+, target=_blank on anchors implies rel=noopener, which makes the protection enabled by default.

Sensitive Code Example

window.open("https://example.com/dangerous"); // Sensitive

Compliant Solution

window.open("https://example.com/dangerous", "WindowName", "noopener");

See

javascript:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs will make sure:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

const fs = require('fs');

let tmp_file = "/tmp/temporary_file"; // Sensitive
fs.readFile(tmp_file, 'utf8', function (err, data) {
  // ...
});

const fs = require('fs');

let tmp_dir = process.env.TMPDIR; // Sensitive
fs.readFile(tmp_dir + "/temporary_file", 'utf8', function (err, data) {
  // ...
});

Compliant Solution

const tmp = require('tmp');

const tmpobj = tmp.fileSync(); // Compliant
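
If adding a dependency is not an option, Node's built-in fs.mkdtemp offers similar guarantees by creating a uniquely named, unpredictable sub-directory. A minimal sketch, where the 'myapp-' prefix is illustrative:

const fs = require('fs');
const os = require('os');
const path = require('path');

// Create a uniquely named sub-directory under the system temp directory
fs.mkdtemp(path.join(os.tmpdir(), 'myapp-'), (err, dir) => {
  if (err) throw err;
  // Restrict the file to the creating user (owner read/write only)
  fs.writeFile(path.join(dir, 'temporary_file'), 'data', { mode: 0o600 }, (err) => {
    if (err) throw err;
  });
});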

See

javascript:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application’s PATH environment variable will be searched for the executable. That search could leave an opening for an attacker if one of the elements in PATH is a directory under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

const cp = require('child_process');
cp.exec('file.exe'); // Sensitive

Compliant Solution

const cp = require('child_process');
cp.exec('/usr/bin/file.exe'); // Compliant

See

javascript:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from all IPv4"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup.addIngressRule(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcpRange(1, 1024)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "0.0.0.0/0", // Noncompliant
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroupIngress( // Noncompliant
    this,
    "ingress-all-ip-tcp-ssh", {
        ipProtocol: "tcp",
        cidrIp: "0.0.0.0/0",
        fromPort: 22,
        toPort: 22,
        groupId: securityGroup.attrGroupId
})

Compliant solution

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.ipv4("192.0.2.0/24"),
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from a trusted range"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup3 = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup3.addIngressRule(
    ec2.Peer.anyIpv4(),
    ec2.Port.tcpRange(1024, 1048)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "192.0.2.0/24",
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroupIngress(
    this,
    "ingress-all-ipv4-tcp-http", {
        ipProtocol: "6",
        cidrIp: "0.0.0.0/0",
        fromPort: 80,
        toPort: 80,
        groupId: securityGroup.attrGroupId
    }
)

Resources

Documentation

Standards

javascript:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';

new Topic(this, 'exampleTopic'); // Sensitive

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';

new CfnTopic(this, 'exampleCfnTopic'); // Sensitive

Compliant Solution

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

new Topic(this, 'exampleTopic', {
    masterKey: encryptionKey
});

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

const cfnTopic = new CfnTopic(this, 'exampleCfnTopic', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

javascript:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(this, "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC} // Sensitive
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: true, // Sensitive
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PUBLIC}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_dms as dms} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: true, // Sensitive
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2, aws_rds as rds} from 'aws-cdk-lib'

const rdsSubnetGroupPublic = new rds.CfnDBSubnetGroup(this, "publicSubnet", {
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "publicSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PUBLIC
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPublic.ref,
    publiclyAccessible: true, // Sensitive
    vpcSecurityGroups: [sg.securityGroupId]
})

Compliant Solution

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(
    this,
    "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: false,
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_dms as dms} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: false,
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2, aws_rds as rds} from 'aws-cdk-lib'

const rdsSubnetGroupPrivate = new rds.CfnDBSubnetGroup(this, "example",{
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "privateSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPrivate.ref,
    publiclyAccessible: false,
    vpcSecurityGroups: [sg.securityGroupId]
})

See

javascript:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer (a sketch follows this list).
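
As an illustration of the CUSTOM option, here is a minimal sketch using aws-cdk-lib's TokenAuthorizer; authorizerFn stands for a hypothetical Lambda function that validates incoming tokens:

import { aws_apigateway as apigateway } from "aws-cdk-lib"

// authorizerFn is a hypothetical Lambda function validating the Authorization header
const authorizer = new apigateway.TokenAuthorizer(this, "custom-authorizer", {
    handler: authorizerFn
})

const resource = api.root.addResource("custom-auth")
resource.addMethod(
    "GET",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizer // attaching an authorizer implies AuthorizationType.CUSTOM
    }
)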

Sensitive Code Example

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example")
resource.addMethod(
    "GET",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.NONE // Sensitive
    }
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "no-auth", {
    apiId: api.ref,
    routeKey: "GET /no-auth",
    authorizationType: "NONE", // Sensitive
    target: exampleIntegration
})

Compliant Solution

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example",{
    defaultMethodOptions:{
        authorizationType: apigateway.AuthorizationType.IAM
    }
})
resource.addMethod(
    "POST",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.IAM
    }
)
resource.addMethod(  // authorizationType is inherited from the Resource's configured defaultMethodOptions
    "GET"
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "auth", {
    apiId: api.ref,
    routeKey: "POST /auth",
    authorizationType: "AWS_IAM",
    target: exampleIntegration
})

See

javascript:S2092

When a cookie is protected with the secure attribute set to true it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is sensitive, for instance a session cookie that is not designed to be sent over non-HTTPS communication.
  • it is unclear whether the website serves mixed content or not (i.e. whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  secure: false, // Sensitive
}); // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  secure: true, // Compliant
}); // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: true }}); // Compliant

See

javascript:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive and has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act as directives for the browser and relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like the Origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' }); // Sensitive
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const cors = require('cors');

let app1 = express();
app1.use(cors()); // Sensitive: by default origin is set to *

let corsOptions = {
  origin: '*' // Sensitive
};

let app2 = express();
app2.use(cors(corsOptions));

User-controlled origin:

function (req, res) {
  const origin = req.header('Origin');
  res.setHeader('Access-Control-Allow-Origin', origin); // Sensitive
};

Compliant Solution

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': 'trustedwebsite.com' }); // Compliant
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const cors = require('cors');

let corsOptions = {
  origin: 'trustedwebsite.com' // Compliant
};

let app = express();
app.use(cors(corsOptions));

User-controlled origin validated with an allow-list:

function (req, res) {
  const origin = req.header('Origin');

  if (trustedOrigins.indexOf(origin) >= 0) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
};
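
The cors middleware also accepts a validation function, which centralizes the allow-list check. A minimal sketch, where trustedOrigins is an assumed list of allowed origins:

const cors = require('cors');
const express = require('express');

const trustedOrigins = ['https://trustedwebsite.com', 'https://admin.trustedwebsite.com'];

const corsOptions = {
  origin: (origin, callback) => {
    // Requests without an Origin header (same-origin, curl) are allowed here;
    // cross-origin requests are only reflected if present in the allow-list.
    if (!origin || trustedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Origin not allowed by CORS'));
    }
  }
};

const app = express();
app.use(cors(corsOptions));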

See

javascript:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, or Groovy's template engine allow configuring automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) will not be relevant when variables are used in an HTML attribute, because the ':' character is not escaped and an attack like the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

mustache.js template engine:

let Mustache = require("mustache");

Mustache.escape = function(text) {return text;}; // Sensitive

let rendered = Mustache.render(template, { name: inputName });

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";

let template = Handlebars.compile(source, { noEscape: true }); // Sensitive

markdown-it markup language parser:

const markdownIt = require('markdown-it');
let md = markdownIt({
  html: true // Sensitive
});

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer(),
  sanitize: false // Sensitive
});

console.log(marked("# test <b>attack/b>"));

kramed markup language parser:

let kramed = require('kramed');

var options = {
  renderer: new kramed.Renderer({
    sanitize: false // Sensitive
  })
};

Compliant Solution

mustache.js template engine:

let Mustache = require("mustache");

let rendered = Mustache.render(template, { name: inputName }); // Compliant autoescaping is on by default

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";
let data = { "name": "<b>Alan</b>" };

let template = Handlebars.compile(source); // Compliant by default noEscape is set to false

markdown-it markup language parser:

let md = require('markdown-it')(); // Compliant by default html is set to false

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer()
}); // Compliant by default sanitize is set to true

console.log(marked("# test <b>attack/b>"));

kramed markup language parser:

let kramed = require('kramed');

let options = {
  renderer: new kramed.Renderer({
    sanitize: true // Compliant
  })
};

console.log(kramed('Attack [xss?](javascript:alert("xss")).', options));

See

javascript:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can encrypt messages as soon as they are received. If adversaries gain physical access to the storage medium, or otherwise leak a message from the file system (for example through a vulnerability in the service), they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sqs.Queue

import { Queue } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example'); // Sensitive

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';

new CfnQueue(this, 'example'); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sqs.Queue

import { Queue, QueueEncryption } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example', {
    encryption: QueueEncryption.KMS_MANAGED
});

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

new CfnQueue(this, 'example', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

javascript:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. If adversaries gain physical access to the storage medium or otherwise obtain the stored files, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'unencrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: false // Sensitive
});

For aws_cdk.aws_efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'unencrypted-implicit-cfn', {
}); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'encrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: true
});

For aws_cdk.aws_efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'encrypted-explicit-cfn', {
    encrypted: true
});

See

secrets:S6700

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A RapidAPI key is a unique identifier that allows you to access and use APIs provided by RapidAPI. This key is used to track your API usage, manage your subscriptions, and ensure that you have the necessary permissions to access the APIs you are using. One RapidAPI key can be used to authenticate against a set of multiple other third-party services, depending on the key entitlement.

If a RapidAPI key leaks to an unintended audience, it can have several potential consequences. Especially, attackers may use the leaked key to access and utilize the APIs associated with that key without permission. This can result in unauthorized usage of API services, potentially leading to misuse, abuse, or excessive consumption of resources.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

RapidAPI services include an audit trail feature that can be used to audit malicious use of the compromised key.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.
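
As one illustration of the vault approach, here is a minimal sketch using the AWS SDK for JavaScript v3; the secret name "rapidapi/key" is a hypothetical identifier:

const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");

const client = new SecretsManagerClient({});

async function getRapidApiKey() {
  // Fetch the secret at runtime instead of embedding it in the source code
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "rapidapi/key" }) // hypothetical secret name
  );
  return response.SecretString;
}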

Code examples

Noncompliant code example

props.set("rapidapi_key", "6f1bbe24b9mshcbb5030202794a4p18f7d0jsndd55ab0f981d") // Noncompliant

Compliant solution

props.set("rapidapi_key", System.getenv("rapidapi_key"))

Resources

Standards

Documentation

secrets:S6701

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Telegram bot keys are used to authenticate and authorize a bot to interact with the Telegram Bot API. These keys are essentially access tokens that allow the bot to send and receive messages, manage groups and channels, and perform other actions on behalf of the bot.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam

An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware used depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that spies out sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("api_token", "7299363101:AAWJlilLyeMaKgTTrrfsyrtxDqqI-cdI-TF") // Noncompliant

Compliant solution

props.set("api_token", System.getenv("API_TOKEN"))

Resources

Standards

secrets:S6702

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A SonarQube token is a unique key that serves as an authentication mechanism for accessing the SonarQube platform’s APIs. It is used to securely authenticate and authorize external tools or services to interact with SonarQube.

Tokens are typically generated for specific users or applications and can be configured with different levels of access permissions. By using a token, external tools or services can perform actions such as analyzing code, retrieving analysis results, creating projects, or managing quality profiles within SonarQube.

If a SonarQube token leaks to an unintended audience, it can pose a security risk to the SonarQube instance and the associated projects. Attackers may use the leaked token to gain unauthorized access to the SonarQube instance. They can potentially view sensitive information, modify project settings, or perform other dangerous actions.

Additionally, attackers with access to a token can modify code analysis results. This can lead to false positives or negatives in the analysis, compromising the accuracy and reliability of the platform.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

The SonarQube audit log can be downloaded from the product web interface and can be used to audit the malicious use of the compromised key. This feature is available starting with SonarQube Enterprise Edition.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("sonar_secret", "squ_b4556a16fa2d28519d2451a911d2e073024010bc") // Noncompliant

Compliant solution

props.set("sonar_secret", System.getenv("SONAR_SECRET"))

Resources

Standards

Documentation

secrets:S6703

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords are often used to authenticate users against database engines. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a database password leaks to an unintended audience, it can have serious consequences for the security of your database instance, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the database instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Never hard-code secrets, not even the default values

It is important that you do not hard-code secrets, even default values.

First, hard-coded default secrets are often short and can be easily compromised even by attackers who do not have access to the code base.

Second, hard-coded default secrets can cause problems if they need to be changed or replaced.

And most importantly, there is always the possibility to accidentally set default secrets for production services, which can lead to security vulnerabilities and make production insecure by default.

To minimize these risks, it is recommended to apply the above strategies, even for the default settings.

Code examples

Noncompliant code example

public static String ConnectionString = "server=database-server;uid=user;pwd=P@ssw0rd;database=ProductionData"; // Noncompliant

Compliant solution

public static String ConnectionString = String.format(
    "server=database-server;uid=user;pwd=%s;database=ProductionData",
    System.getenv("DB_PASSWORD")
)

Resources

Standards

secrets:S6704

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Riot API keys are used to access the Riot Games API, which provides developers with programmatic access to various data and services related to Riot Games' products, such as League of Legends. These API keys are used to authenticate and authorize requests made to the API, allowing developers to retrieve game data, player statistics, match history, and other related information.

If a Riot API key is leaked to an unintended audience, it can have significant consequences. One of the main risks is unauthorized access. The unintended audience may exploit the leaked API key to gain entry to the Riot Games API. This can result in the unauthorized retrieval of sensitive data and misuse of services provided by the API. It poses a serious security threat as it allows individuals to access information that they should not have access to, potentially compromising the privacy and integrity of the data.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("api_key", "RGAPI-924549e3-31a9-406e-9e92-25ed41206dce") // Noncompliant

Compliant solution

props.set("api_key", System.getenv("API_KEY"))

Resources

Standards

secrets:S6705

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

An OpenWeather API key is a unique identifier that allows you to access the OpenWeatherMap API. The OpenWeatherMap API provides weather data and forecasts for various locations worldwide.

If an OpenWeather API key leaks to an unintended audience, it can have several security consequences. Attackers may use the leaked API key to access the OpenWeatherMap API and consume the weather data without proper authorization. This can lead to excessive usage, potentially exceeding the API rate limits, or violating the terms of service.

Moreover, depending on the pricing model of the corresponding OpenWeather account, unauthorized usage of the leaked API key can result in unexpected charges or increased costs. Attackers may consume a large amount of data or make excessive requests, leading to additional expenses for the API key owner.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

url = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid=ae73acab47d0fc4b71b634d943b00518&q="

Compliant solution

import os
token = os.environ["OW_TOKEN"]

uri = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid={token}&q="

Resources

Standards

Documentation

OpenWeather Documentation - API keys

secrets:S6706

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A cryptographic private key is a piece of sensitive information used in asymmetric cryptography. Private keys are used in conjunction with public keys to secure communications and authenticate digital signatures.

Private keys can be used to achieve two main cryptographic operations, encryption and digital signature. Those operations are the basis of multiple higher-level security mechanisms such as:

  • User authentication
  • Server authentication, for example in the X509 trust model
  • E-mail encryption

Disclosing a cryptographic private key to an unintended audience can have severe security consequences. The exact impact will vary depending on the role of the key and the assets it protects.

For example, if the key is used in conjunction with an X509 certificate to authenticate a web server as part of TLS communications, attackers will be able to impersonate that server. This leads to man-in-the-middle attacks that affect both the confidentiality and integrity of the communications from clients to that server.

If the key was used as part of e-mail protocols, attackers might be able to send e-mails on behalf of the key owner or decrypt previously encrypted emails. This might lead to sensitive information disclosure and reputation loss.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

In most cases, if the key is used as part of a larger trust model (X509, PGP, etc), it is necessary to issue and publish a revocation certificate. Doing so will ensure that all people and assets that rely on this key for security operations are aware of its compromise and stop trusting it.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

private_key = "-----BEGIN EC PRIVATE KEY-----" \
    "MF8CAQEEGEfVxjrMPigNhGP6DqH6DPeUZPbaoaCCXaAKBggqhkjOPQMBAaE0AzIA" \
    "BCIxho34upZyXDi/AUy/TBisGeh4yKJN7pit9Z+nKs4QajVy97X8W9JdySlbWeRt" \
    "2w==" \
    "-----END EC PRIVATE KEY-----"

Compliant solution

with open("/path/to/private.key","r") as key_file:
    private_key = key_file.read()

Resources

Standards

secrets:S6708

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The Discord webhook URL grants access to a channel in your server, represented by a bot. A plethora of permissions can be specified in the Bot Permissions pane.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam

An attacker can use this webhook to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware used depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that spies out sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("discord_webhook_url", "https://discord.com/api/webhooks/1143503308481384529/SBkGFYyl6njbyg_DJwhP2x5s4XAzd8Ll5CZQ7HG4xfDRJhOTAIlb0UiPL4ykOZQNIHpd") // Noncompliant

Compliant solution

props.set("discord_webhook_url", System.getenv("DISCORD_WEBHOOK_URL"))

Resources

Standards

secrets:S6755

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to a DigitalOcean personal access token or OAuth token, they might be able to compromise your DigitalOcean environment. This includes control over Droplets and any applications that are running, as well as databases and other assets that are managed by the account.

What is the potential impact?

If an attacker manages to gain access to the DigitalOcean environment, there exist several ways that they could seriously harm your organization. Any data that is stored in the environment could be leaked, but the environment itself could even be tampered with.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Infrastructure takeover

By obtaining a leaked secret, an attacker can gain control over your organization’s DigitalOcean infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining.

This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions.

Furthermore, corporate DigitalOcean infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data and to cause more damage to the overall infrastructure.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

require 'droplet_kit'

token = 'dop_v1_1adc4095c3c676ff1c31789a1a86480195a5b3d955010c94fcfa554b34640e1e'  # Noncompliant
client = DropletKit::Client.new(access_token: token)

Compliant solution

require 'droplet_kit'

token = ENV['DIGITALOCEAN_TOKEN']
client = DropletKit::Client.new(access_token: token)

Resources

Documentation

DigitalOcean Documentation - How to Create a Personal Access Token

Standards

secrets:S6758

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to an NPM access token, they might be able to gain access to any private package linked to this token.

What is the potential impact?

The exact impact of the compromise of an NPM access token varies depending on the permissions granted to this token. It can range from loss of sensitive data and source code to severe supply chain attacks.

Compromise of sensitive source code

The affected service is used to store private packages and repositories. If a token is leaked, it can be used by unauthorized individuals to gain access to your sensitive code, proprietary libraries, and other confidential resources. This can lead to intellectual property theft, unauthorized modifications, or even sabotage of your software.

If these private packages contain other secrets, it might even lead to further breaches in the organization’s services.

Supply chain attacks

If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, the consequences can extend far beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories.

This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

steps:
    - run: |
        npm install
    - env:
        NPM_TOKEN: npm_tCEMceczuiTXKQaBjGIaAezYQ63PqI972ANG # Noncompliant

Compliant solution

steps:
    - run: |
        npm install
    - env:
        NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

Going the extra mile

Reducing the permission scope per secret

By reducing the permission scope, the token is granted only the minimum set of permissions required to perform its intended tasks. This follows the principle of least privilege, which states that a user or token should have only the necessary privileges to carry out its specific functions. By adhering to this principle, the potential attack surface is minimized, reducing the risk of unauthorized access or misuse of sensitive resources.

Additionally, if a token is compromised, the reduced permissions scope limits the potential damage that can be done. With fewer permissions, the attacker’s ability to access or modify critical resources is restricted, reducing the impact of the compromise.

Resources

Documentation

npm Docs - Revoking access tokens

Standards

secrets:S6782

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

When an attacker obtains a Docker Hub personal access token together with the username of the account, they can access all container images available to that account.

What is the potential impact?

In case of a leaked token, an attacker can read all private images and can also push new malicious images to the registry. This can have multiple severe consequences.

Compromise of sensitive source code

Docker Hub is often used to store private container images. If a personal access token is leaked, it can be used by unauthorized individuals to gain access to these images. Not only does this allow a malicious person to access and use internal projects, but it can also enable them to leak sensitive source code, proprietary binaries, and other confidential resources belonging to these projects. This can lead to intellectual property theft, unauthorized modifications, or even sabotage of your software.

If these private images contain other secrets, it might even lead to further breaches in the organization’s services.

Supply chain attacks

If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, the consequences can extend far beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories.

This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

steps:
  - name: Login to DockerHub
    uses: docker/login-action@v2
    with:
      username: mobythewhale
      password: dckr_pat_cq7wQZcv9xZkVlxMhDTcTV00CDo # Noncompliant

Compliant solution

steps:
  - name: Login to DockerHub
    uses: docker/login-action@v2
    with:
      username: ${{ secrets.dockerUsername }}
      password: ${{ secrets.dockerAccessToken }}

Resources

Documentation

Docker docs - Create and manage access tokens

Standards

secrets:S6783

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Disclosure of blockchain data

The leaked key can be used to query APIs of blockchain services and access sensitive information stored in the service metadata. This may include user identities and other sensitive data.
Such disclosure compromises user privacy and confidentiality.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity.

All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.
This can range from indirect losses to direct unauthorized transfers of funds that can lead to bankruptcy or impoverishment of individuals.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("infura_api_key", "https://mainnet.infura.io/v3/f6fc4aa25abb16e901876269d01f2ec5") // Noncompliant

Compliant solution

props.set("infura_api_key", System.getenv("INFURA_API_KEY"))

Resources

Standards

secrets:S6910

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Postmark server tokens are used to authenticate requests to the Postmark API. When a request is made to the Postmark API, the server token is included in the header of the request. This process enables Postmark to confirm that the request originates from a trusted source and should be processed accordingly.

These tokens are sensitive because they provide full access to all features and data on a specific server in Postmark.
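
To make the request flow concrete, below is a minimal sketch of an authenticated Postmark call in Java; the endpoint and header name follow Postmark’s public API documentation, while the sender, recipient, and environment variable name are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostmarkExample {
    public static void main(String[] args) throws Exception {
        // The server token is read from the environment, never hard-coded.
        String token = System.getenv("POSTMARK_SERVER_TOKEN");

        String payload = "{\"From\":\"sender@example.com\",\"To\":\"user@example.com\","
                + "\"Subject\":\"Hello\",\"TextBody\":\"Hello from Postmark\"}";

        // Postmark authenticates the request through this header.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.postmarkapp.com/email"))
                .header("Accept", "application/json")
                .header("Content-Type", "application/json")
                .header("X-Postmark-Server-Token", token)
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}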

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this token to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that steals sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Account termination

Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure.

The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("X-Postmark-Server-Token", "89d36b44-4c54-4623-91d9-b61f29b702f8") // Noncompliant

Compliant solution

props.set("X-Postmark-Server-Token", System.getenv("POSTMARK_SERVER_TOKEN"))

Resources

Standards

secrets:S6686

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Clarifai API key leaks to an unintended audience, it could potentially lead to unauthorized access to the Clarifai account and its associated data. This could result in the compromise of sensitive data or financial loss.

Financial loss

Financial losses can occur when a secret used to access a paid third-party service is disclosed as part of the source code of client applications. Anyone with the secret can then use the third-party service without limit for their own needs, including in ways that were never intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

from clarifai_grpc.grpc.api.status import status_code_pb2

metadata = (('authorization','Key d819f799b90bc8dbaffd83661782dbb7'),)  # Noncompliant

Compliant solution

import os
from clarifai_grpc.grpc.api.status import status_code_pb2

metadata = (('authorization', 'Key ' + os.environ["CLARIFAI_API_KEY"]),)

Resources

Standards

secrets:S6689

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

GitHub tokens are used for authentication and authorization purposes when interacting with the GitHub API. They serve as a way to identify and authenticate users or applications that are making requests to the GitHub API.
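
For instance, an API request authenticated with a token drawn from the environment might look like the following sketch; the Authorization header format is GitHub’s documented scheme, while the environment variable name is a placeholder.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GitHubApiExample {
    public static void main(String[] args) throws Exception {
        // Provided by the environment or the CI system, never hard-coded.
        String token = System.getenv("GITHUB_TOKEN");

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.github.com/user"))
                .header("Accept", "application/vnd.github+json")
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // details of the authenticated user
    }
}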

The consequences vary greatly depending on the situation and on the audience the secret was exposed to. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret used to access a paid third-party service is disclosed as part of the source code of client applications. Anyone with the secret can then use the third-party service without limit for their own needs, including in ways that were never intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("token", "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") // Noncompliant

Compliant solution

props.set("token", System.getenv("TOKEN"))

Resources

Documentation

GitHub documentation - Managing your personal access tokens

Standards

secrets:S6710

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

An FCM API key leak is particularly severe if the affected key has administrative privileges: The range of topics to which an attacker can subscribe and send messages is much larger than with normal privileges.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this API key to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that steals sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Chaining of vulnerabilities

Triggering arbitrary push messages can lead to problems ranging from a denial of service to worse, depending on how the message data is handled. If a component that consumes these messages performs an action that is affected by a vulnerability, the leaked key acts as a remote attack vector on the enterprise.

Components consuming these messages could, for example, experience unexpected failures or excessive resource consumption. If such a component is a single point of failure (SPOF), this leak is critical.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("fcm_key", "cfUDlZL9YBQ:APA91bJxU9oMf3RbiyqnmUO60KU_JLawjf2yrTfSs3_ZAp3dxZS0J88G5P5AoKWoviAdUK5i-2SB7iHcb4Wd38EMsZXBAAb6GZMaSOeKfaI0DuLxAFTOgGNKRSmj2R9gIQyzpjoThmqe") // Noncompliant

Compliant solution

props.set("fcm_key", System.getenv("FCM_KEY"))

Resources

Standards

secrets:S6713

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Slack Incoming Webhook URLs have write-only access to a channel: They can only post messages.
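
As a brief sketch of that write-only capability, posting through an incoming webhook is a single JSON request whose main field is the message text; the environment variable name is a placeholder.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlackNotifier {
    public static void main(String[] args) throws Exception {
        // Injected at deploy time; never stored in the repository.
        String webhookUrl = System.getenv("SLACK_WEBHOOK_URL");

        HttpRequest request = HttpRequest.newBuilder(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"text\": \"Build passed\"}"))
                .build();

        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}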

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this webhook to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that steals sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("slack_webhook_url", "https://hooks.slack.com/services/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") // Noncompliant

Compliant solution

props.set("slack_webhook_url", System.getenv("SLACK_WEBHOOK_URL"))

Resources

Standards

secrets:S6717

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Slack Workflow webhook URLs have different capabilities depending on how the workflow is configured: They can be used to post Slack messages or to trigger other workflows.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this webhook to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that steals sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Chaining of vulnerabilities

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise.

Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If it is a single point of failure (SPOF), this leak is critical.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("slack_webhook_url", "https://hooks.slack.com/workflows/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") // Noncompliant

Compliant solution

props.set("slack_webhook_url", System.getenv("SLACK_WEBHOOK_URL"))

Resources

Standards

secrets:S6718

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Stripe endpoint secrets allow a user-owned webhook endpoint to verify that incoming requests really originated from Stripe. These webhooks can carry thousands of different event types, many of which contain sensitive data.
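
To make the verification mechanism concrete, here is a simplified sketch of the check Stripe documents: the signature header carries a timestamp and an HMAC-SHA256 of "timestamp.payload" computed with the endpoint secret. Header parsing and tolerance checks are omitted, and the secret itself would be read from the environment, as in the compliant example below.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class StripeSignatureCheck {
    // Returns true if signatureHex is a valid HMAC-SHA256 over "timestamp.payload".
    static boolean isValid(String payload, String timestamp, String signatureHex,
                           String endpointSecret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(endpointSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] expected = mac.doFinal((timestamp + "." + payload).getBytes(StandardCharsets.UTF_8));
        // Constant-time comparison to avoid timing side channels.
        return MessageDigest.isEqual(expected, HexFormat.of().parseHex(signatureHex));
    }
}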

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity.

All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.
This can range from indirect losses to direct unauthorized transfers of funds that can lead to bankruptcy or impoverishment of individuals.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("stripe_endpoint_secret", "whsec_3cAgzYnf0seUtVzSAP08cH9nDICqwI1T") // Noncompliant

Compliant solution

props.set("stripe_endpoint_secret", System.getenv("STRIPE_ENDPOINT_SECRET"))

Resources

Standards

secrets:S6719

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Disclosure of blockchain data

The leaked key can be used to query APIs of blockchain services and access sensitive information stored in the service metadata. This may include user identities and other sensitive data.
Such disclosure compromises user privacy and confidentiality.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity.

All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.
This can range from indirect losses to direct unauthorized transfers of funds that can lead to bankruptcy or impoverishment of individuals.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("alchemy_eth_api_key", "https://eth-mainnet.alchemyapi.io/v2/sAwFYc32ctGA_VSdesa72bheDxfGWRWl") // Noncompliant

Compliant solution

props.set("alchemy_eth_api_key", System.getenv("ALCHEMY_ETH_API_KEY"))

Resources

Standards

secrets:S6722

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

PlanetScale Database passwords are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.
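
Because PlanetScale exposes a MySQL-compatible interface, a typical connection could look like the sketch below, with all credentials supplied by the environment; the host, database name, and variable names are placeholders, and TLS options depend on the driver version.

import java.sql.Connection;
import java.sql.DriverManager;

public class PlanetScaleConnect {
    public static void main(String[] args) throws Exception {
        // Credentials come from the environment, not from source code.
        String host = System.getenv("PLANETSCALE_HOST");
        String user = System.getenv("PLANETSCALE_USER");
        String password = System.getenv("PLANETSCALE_PASSWORD");

        String url = "jdbc:mysql://" + host + "/mydb?sslMode=REQUIRED";
        try (Connection connection = DriverManager.getConnection(url, user, password)) {
            System.out.println("Connected: " + !connection.isClosed());
        }
    }
}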

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("planetscale_password", "pscale_pw_hatgoG_EprhgnblWotaJGbeOeFE7q9BwW0_g5ML486D") // Noncompliant

Compliant solution

props.set("planetscale_password", System.getenv("PLANETSCALE_PASSWORD"))

Resources

Standards

secrets:S6723

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Mailgun API keys provide complete control over the Mailgun account and allow sending bulk emails.
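
For illustration, Mailgun’s HTTP API authenticates with basic auth using the literal username "api" and the key as the password; the sketch below draws the key from the environment, with the domain and addresses as placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MailgunExample {
    public static void main(String[] args) throws Exception {
        // Never hard-code the key; read it from the environment.
        String apiKey = System.getenv("MAILGUN_KEY");
        String auth = Base64.getEncoder()
                .encodeToString(("api:" + apiKey).getBytes(StandardCharsets.UTF_8));

        // Form-encoded body, as expected by the messages endpoint.
        String body = "from=me@mydomain.example&to=user@example.com"
                + "&subject=Hello&text=Hello%20from%20Mailgun";

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://api.mailgun.net/v3/mydomain.example/messages"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}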

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. Such data is known as Personally Identifiable Information (PII).
The leaked API key could give unauthorized individuals a gateway to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam

An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that steals sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Account termination

Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure.

The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("mailgun_key", "key-9392bf4edd483c111748f422750442fe") // Noncompliant

Compliant solution

props.set("mailgun_key", System.getenv("MAILGUN_KEY"))

Resources

Standards

secrets:S6751

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The exact consequences of a PyPI API token compromise can vary depending on the scope of the affected token. Depending on this factor, the attacker might get access to the full account the token is bound to or only to a project belonging to that user.

In any case, such a compromise can lead to source code leaks, data leaks and even serious supply chain attacks. In general, a reputational loss is also a common threat.

Compromise of sensitive source code

The affected service is used to store private packages and repositories. If a token is leaked, it can be used by unauthorized individuals to gain access to your sensitive code, proprietary libraries, and other confidential resources. This can lead to intellectual property theft, unauthorized modifications, or even sabotage of your software.

If these private packages contain other secrets, it might even lead to further breaches in the organization’s services.

Supply chain attacks

If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, the consequences can extend far beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories.

This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

For PyPI, keyring is a recommended solution to securely store secrets. Further explanation is given in the example below.

Code examples

Noncompliant code example

PyPI API tokens can be used to authenticate with PyPI by setting the token as a password in .pypirc.

[pypi]
username = __token__
password = pypi-YBf3ZAIKOMPwNZ1VaQ0RAtjww5lI1az1CMLEOWgDQN56EPADfzRmgsENVcmIUh2mSBwYlTtyNKGmVlLm2MZD2aJOTWmD2EO5PMyWjvUY3Ii2CjsidALCNCNmvX8N8gcijBliFN2ciBCLgQdi2YYfGjA1kz19z1UBKg

Compliant solution

Instead, Python’s keyring package can be used to securely authenticate with PyPI. Once keyring is installed using pip, it should be used for authentication automatically. It is also possible to configure it manually using the following:

pip config set --global global.keyring-provider subprocess

Going the extra mile

Reducing the permission scope per secret

By reducing the permission scope, the token is granted only the minimum set of permissions required to perform its intended tasks. This follows the principle of least privilege, which states that a user or token should have only the necessary privileges to carry out its specific functions. By adhering to this principle, the potential attack surface is minimized, reducing the risk of unauthorized access or misuse of sensitive resources.

Additionally, if a token is compromised, the reduced permissions scope limits the potential damage that can be done. With fewer permissions, the attacker’s ability to access or modify critical resources is restricted, reducing the impact of the compromise.

Resources

Standards

secrets:S6752

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

Attackers with access to an Artifactory API key will be able to use the API with all the permissions the corresponding user has been granted.

What is the potential impact?

The consequences vary depending on the compromised account entitlement but can range from proprietary information leaks to severe supply chain attacks.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

In the case of Artifactory repositories, if they contain private code or software, attackers will be able to steal it. They could use this software themselves, search it for further exploitable vulnerabilities, or disclose it publicly, with or without asking for a ransom.

Supply chain attacks

If the leaked secret gives an attacker the ability to publish code to private packages or repositories under the name of the organization, the consequences can extend far beyond the compromise of source code. The attacker may inject malware, backdoors, or other harmful code into these private repositories.

This can cause further security breaches inside the organization, but will also affect clients if the malicious code gets added to any products. Distributing code that (unintentionally) contains backdoors or malware can lead to widespread security vulnerabilities, reputational damage, and potential legal liabilities.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("artifactory_token", "AKCp8vLnDPZeVA29WylUNdaT54Pg2E9rx8gJWfbPCw2Wsb0UCAEmimIPFscGbJPYEUhXVBCRQ") // Noncompliant

Compliant solution

props.set("artifactory_token", System.getenv("ARTIFACTORY_TOKEN"))

Resources

Standards

secrets:S6753

Zuplo is an API management platform built for developers. It handles authentication and access control for your API and provides additional functionality such as rate limiting the number of requests to your backend. For your backend to validate that a request has been processed by Zuplo, it relies on an API key generated in the Zuplo Developer Portal. If this key is compromised, attackers will be able to bypass Zuplo and access your API without authentication or authorization.
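
A minimal sketch of what that backend-side validation might look like follows, assuming the shared key arrives in a request header; the header name and environment variable are illustrative assumptions, not Zuplo’s documented contract.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ZuploGatewayCheck {
    // Compares the key presented by the gateway with the expected one.
    static boolean isFromGateway(String presentedKey) {
        String expected = System.getenv("ZUPLO_API_KEY"); // never hard-coded
        if (expected == null || presentedKey == null) {
            return false;
        }
        // Constant-time comparison to avoid timing side channels.
        return MessageDigest.isEqual(
                presentedKey.getBytes(StandardCharsets.UTF_8),
                expected.getBytes(StandardCharsets.UTF_8));
    }
}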

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The exact impact of a Zuplo API key being leaked varies greatly depending on the type of services the software is used to implement. In general, consequences ranging from a denial of service to application compromise can be expected.

Chaining of vulnerabilities

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise.

Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If an affected component is a single point of failure (SPOF), this leak is critical.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("zapi_key", "zpka_213d294a9a5a44619cd6a02e55a20417_5f43e4d0") // Noncompliant

Compliant solution

props.set("zapi_key", System.getenv("ZAPI_KEY"))

Resources

Documentation

Standards

secrets:S6762

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to a Grafana personal access token or Grafana Cloud token, they might be able to compromise the Grafana environment linked to this token. By doing so, it might be possible for business-critical data to be leaked by the attacker.

What is the potential impact?

Depending on the permissions given to the secret, the impact might range from the compromise of the data of some dashboards to a full takeover of the Grafana environment.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Application takeover

With control over the Grafana application, the attacker can modify dashboards, alter data sources, or inject malicious code. This can result in the manipulation of displayed data, misleading visualizations, or even the introduction of backdoors for further exploitation.

The attacker may even attempt to escalate their privileges within the Grafana environment. By gaining administrative access or higher-level permissions, they can perform more significant actions, such as modifying access controls, adding or deleting users, or changing system configurations.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

import requests

token = 'glsa_geygSnIfuK5vBG0KgaflRCQfIb8mzaM7_b0999d91'  # Noncompliant
response = requests.get('https://grafana.example.org/api/dashboards/home', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})

Compliant solution

import os
import requests

token = os.getenv('GRAFANA_SERVICE_ACCOUNT_TOKEN')
response = requests.get('https://grafana.example.org/api/dashboards/home', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})

Resources

Documentation

Grafana Documentation - Service Accounts

Standards

secrets:S6768

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to a Typeform personal access token, they might be able to compromise the data that is accessible to the linked Typeform account. By doing so, it might be possible for customer data to be leaked by the attacker.

What is the potential impact?

If an attacker gains access to forms and the data linked to the forms, your organization may be impacted in several ways.

Data compromise

Typeform is often used to store private information that users have shared through their forms. This is called Personally Identifiable Information.
The leaked access token could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive personal information. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing attacks

An attacker can use the Typeform access token to lure users into following links to a malicious domain controlled by the attacker.

They can use the data stored in the forms to create attacks that look legitimate to the victims. In some cases, they might even edit existing forms to lead users to a malicious domain directly.

Once a user has been phished on a legitimate-seeming third-party website, the attacker can trick them into submitting sensitive information, such as login credentials or financial details. This can lead to identity theft, financial fraud, or unauthorized access to other systems.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

import requests

token = 'tfp_DEueEgDipkmx52r7rgU5EC7VC5K2MzzsR61ELEkqmh3Y_3mJqwKJ2vtfX5N'  # Noncompliant
response = requests.get('https://api.typeform.com/forms', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})

Compliant solution

import os
import requests

token = os.getenv('TYPEFORM_PERSONAL_ACCESS_TOKEN')
response = requests.get('https://api.typeform.com/forms', headers={
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
})

Resources

Documentation

Typeform Developers - Regenerate your personal access token

Standards

secrets:S6769

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to a Shopify app token or a Shopify Partners token, they might be able to compromise the Shopify environment linked to this token. As this environment typically contains both important financial data and the personal information of clients, a breach by a malicious entity could have a serious impact on the organization.

What is the potential impact?

Shopify stores both important information about customers and financial information in general. If an attacker manages to get access to either of these through a leaked secret, they could severely impact the business in multiple ways.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Furthermore, the personally identifiable information held by the Shopify platform could be used for phishing. Insufficiently protecting sensitive client information, such as addresses, email addresses, and even financial information, can directly harm these clients and will also hurt the reputation of the organization.

Disclosure of financial data

When an attacker gains access to an organization’s financial information, it can have severe consequences for the organization. One of the primary concerns is the potential leakage of sensitive financial data. This information may include bank account details, credit card information, or confidential financial reports. If this data falls into the wrong hands, it can be used for malicious purposes such as identity theft, unauthorized access to financial accounts, or even blackmail.

The disclosure of financial information can also lead to a loss of confidence and damage the organization’s reputation with its stakeholders. Customers, partners, and investors place trust in organizations to protect their financial data. In case of a breach, customers may be hesitant to continue doing business with this company, leading to a loss of revenue and market share. Similarly, partners and investors may reconsider their long-term collaborations or investments due to concerns about the organization’s overall security posture.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

import requests

token = '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'  # Noncompliant
response = requests.get('https://test-shop.myshopify.com/admin/api/2021-07/shop.json', headers={
    'X-Shopify-Access-Token': token,
    'Content-Type': 'application/json'
})

Compliant solution

import os
import requests

token = os.getenv('SHOPIFY_ACCESS_TOKEN')
response = requests.get('https://test-shop.myshopify.com/admin/api/2021-07/shop.json', headers={
    'X-Shopify-Access-Token': token,
    'Content-Type': 'application/json'
})

Resources

Documentation

Shopify.dev docs - Access tokens for custom apps in the Shopify admin

Standards

secrets:S6337

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that harvests sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can use the third-party service without limit for their own needs, including in ways that were not intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("ibm-key", "fDKU7e_u_EnQgWgDVO4b_ubGqVTa5IYwWEey7lMfEB_1") // Noncompliant

Compliant solution

props.set("ibm-key", System.getenv("IBM_KEY"))

Resources

Standards

secrets:S6338

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Azure Storage Account Keys are used to authenticate and authorize access to Azure Storage resources, such as blobs, queues, tables, and files. These keys are used to authenticate requests made against the storage account.

If an Azure Storage Account Key is leaked to an unintended audience, it can pose a significant security risk to your Azure Storage account.

An attacker with access to your storage account key can potentially access and modify all the data stored in your storage account. They can also create new resources, delete existing ones, and perform other actions that can compromise the integrity and confidentiality of your data.

In addition, an attacker with access to your storage account key can also incur charges on your account by creating and using resources, which can result in unexpected billing charges.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

using Azure.Storage.Blobs;
using Azure.Storage;

class Example
{
    static void Main(string[] args)
    {
        string account = "accountname";
        string accountKey = "4dVw+l0W8My+FwuZ08dWXn+gHxcmBtS7esLAQSrm6/Om3jeyUKKGMkfAh38kWZlItThQYsg31v23A0w/uVP4pg=="; // Noncompliant
        StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(account, accountKey);

        BlobServiceClient blobServiceClient = new BlobServiceClient(
            new Uri($"https://{account}.blob.core.windows.net"),
            sharedKeyCredential);
    }
}

Compliant solution

Using environment variables:

using System;
using Azure.Storage.Blobs;
using Azure.Storage;

class Example
{
    static void Main(string[] args)
    {
        string account = Environment.GetEnvironmentVariable("ACCOUNT_NAME");
        string accountKey = Environment.GetEnvironmentVariable("ACCOUNT_KEY");
        StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(account, accountKey);

        BlobServiceClient blobServiceClient = new BlobServiceClient(
            new Uri($"https://{account}.blob.core.windows.net"),
            sharedKeyCredential);
    }
}

Using a passwordless approach, thanks to DefaultAzureCredential:

using System;
using Azure.Storage.Blobs;
using Azure.Identity;

class Example
{
    static void Main(string[] args)
    {
        string account = Environment.GetEnvironmentVariable("ACCOUNT_NAME");
        var blobServiceClient = new BlobServiceClient(
                new Uri($"https://{account}.blob.core.windows.net"),
                new DefaultAzureCredential());
    }
}

Resources

Standards

Documentation

secrets:S6684

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Azure Subscription Keys are used to authenticate and authorize access to Azure resources and services. These keys are unique identifiers that are associated with an Azure subscription and are used to control access to resources such as virtual machines, storage accounts, and databases. Subscription keys are typically used in API requests to Azure services, and they help ensure that only authorized users and applications can access and modify resources within an Azure subscription.

If an Azure Subscription Key is leaked to an unintended audience, it can pose a significant security risk to the Azure subscription and the resources it contains. An attacker who gains access to a subscription key can use it to authenticate and access resources within the subscription, potentially causing data breaches, data loss, or other malicious activities.

Depending on the level of access granted by the subscription key, an attacker could potentially create, modify, or delete resources within the subscription, or even take control of the entire subscription. This could result in significant financial losses, reputational damage, and legal liabilities for the organization that owns the subscription.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Microsoft Azure provides an activity log that can be used to audit access to the API.
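
A hedged sketch of that audit step, assuming the azure-identity and azure-mgmt-monitor Python packages and a subscription ID provided via an environment variable; the timestamp filter below is an illustrative placeholder for the disclosure date.

import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

# List activity-log events since the assumed disclosure date to spot
# callers or operations that should not be there.
events = client.activity_logs.list(filter="eventTimestamp ge '2024-05-01T00:00:00Z'")
for event in events:
    print(event.caller, event.operation_name.value)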

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("subscription_key", "efbb1a98f026d061464af685cd16dcd3") // Noncompliant

Compliant solution

props.set("subscription_key", System.getenv("SUBSCRIPTION_KEY"))

Resources

Standards

Documentation

secrets:S6687

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Django secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to sign cookies and other sensitive data so that an attacker could potentially use it to perform malicious actions.

For example, an attacker could use the secret key to create their own cookies that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality.

In the worst-case scenario, an attacker could be able to execute arbitrary code on the application and take over its hosting server.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

In Django, changing the secret value is sufficient to invalidate any data that it protected. It is important not to add the revoked secret to the SECRET_KEY_FALLBACKS list: doing so would keep data signed with the leaked key trusted.
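
A minimal settings.py sketch of that rotation advice, assuming keys are supplied through environment variables (the variable names are illustrative): the new key becomes SECRET_KEY, and SECRET_KEY_FALLBACKS (available since Django 4.1) lists only keys retired during routine rotation, never a leaked one.

import os

# The active signing key; a missing variable fails loudly at startup.
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]

# Only previously rotated, non-compromised keys belong here. Adding a
# leaked key would keep data it signed trusted, defeating the revocation.
SECRET_KEY_FALLBACKS = [
    key for key in os.environ.get("DJANGO_OLD_SECRET_KEYS", "").split(",") if key
]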

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

SECRET_KEY = 'r&lvybzry1*k+qq)=x-!=0yd5l5#1gxzk!82@ru25*ntos3_9^'  # Noncompliant

Compliant solution

import os

SECRET_KEY = os.environ["SECRET_KEY"]

Resources

Standards

Documentation

secrets:S6688

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A Facebook application secret key is a unique authentication token assigned to a Facebook application. It is used to authenticate and authorize the application to access Facebook’s APIs and services, such as:

  • retrieving user data
  • posting on behalf of users
  • accessing various Facebook features
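
One documented use of the app secret is generating the appsecret_proof that Graph API calls can be required to send alongside an access token; here is a minimal sketch, with the token and secret assumed to come from environment variables rather than the code.

import hashlib
import hmac
import os

app_secret = os.environ["FACEBOOK_APP_SECRET"]
access_token = os.environ["FACEBOOK_ACCESS_TOKEN"]

# appsecret_proof is the HMAC-SHA256 of the access token, keyed with the
# app secret, sent as a query parameter to prove the caller knows the secret.
proof = hmac.new(
    app_secret.encode(), access_token.encode(), hashlib.sha256
).hexdigest()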

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that harvests sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("facebook_secret", "a569a8eee3802560e1416edbc4ee119d") // Noncompliant

Compliant solution

props.set("facebook_secret", System.getenv("FACEBOOK_SECRET"))

Resources

Standards

Documentation

secrets:S6697

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in MySQL are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a MySQL password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a MySQL database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the MySQL instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

General-purpose MySQL log files contain information about user authentication. They can be used to audit malicious use of the accounts affected by the password leak.
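
A hedged sketch of turning that logging on, assuming the mysql-connector-python package and an administrative account with privileges to change global variables; the log file path is an illustrative assumption.

import os

import mysql.connector

conn = mysql.connector.connect(
    host="example.com",
    user=os.environ["MYSQL_ADMIN_USER"],
    password=os.environ["MYSQL_ADMIN_PASSWORD"],
)
cursor = conn.cursor()

# Record connections and statements to a file for later review.
cursor.execute("SET GLOBAL general_log_file = '/var/log/mysql/general.log'")
cursor.execute("SET GLOBAL general_log = 'ON'")

conn.close()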

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Never hard-code secrets, not even the default values

It is important that you do not hard-code secrets, even default values.

First, hard-coded default secrets are often short and can be easily compromised even by attackers who do not have access to the code base.

Second, hard-coded default secrets can cause problems if they need to be changed or replaced.

And most importantly, there is always the risk of default secrets accidentally being left in place for production services, which can lead to security vulnerabilities and make production insecure by default.

To minimize these risks, it is recommended to apply the above strategies, even for the default settings.
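
A minimal sketch of the fail-fast alternative to hard-coded defaults; the variable name follows the compliant example below.

import os

# os.environ[...] raises KeyError when the variable is missing, so a
# misconfigured deployment stops immediately instead of silently running
# with a well-known default password.
password = os.environ["MYSQL_PASSWORD"]

# Anti-pattern to avoid: the hard-coded fallback becomes the production
# secret whenever the variable is unset.
# password = os.getenv("MYSQL_PASSWORD", "changeme")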

Code examples

Noncompliant code example

uri = "mysql://foouser:foopass@example.com/testdb"

Compliant solution

import os
user = os.environ["MYSQL_USER"]
password = os.environ["MYSQL_PASSWORD"]

uri = f"mysql://{user}:{password}@example.com/testdb"

Resources

Standards

secrets:S6720

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Zapier webhook URLs have different effects depending on their permissions: they can be limited to writing simple messages in instant messaging apps, or they can trigger other, more advanced workflows.
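
To see why the URL itself must be treated as a secret, here is a minimal sketch of how such a webhook is triggered: any caller who knows the URL can POST to it. The environment variable name is an illustrative assumption.

import os

import requests

# Anyone holding the URL can fire the workflow; no further credential is needed.
response = requests.post(
    os.environ["ZAPIER_WEBHOOK_URL"],
    json={"event": "example", "detail": "triggered by whoever knows the URL"},
)
response.raise_for_status()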

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that harvests sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Chaining of vulnerabilities

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise.

Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If an affected component is a single point of failure (SPOF), this leak is critical.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("zapier_webhook_url", "https://hooks.zapier.com/hooks/catch/3017724/t0q8ed/") // Noncompliant

Compliant solution

props.set("zapier_webhook_url", System.getenv("ZAPIER_WEBHOOK_URL"))

Resources

Standards

secrets:S6721

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Teams Workflow webhook URLs have different effects depending on their permissions: they can be limited to writing Teams posts, or they can trigger other workflows.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that harvests sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Chaining of vulnerabilities

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise.

Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If an affected component is a single point of failure (SPOF), this leak is critical.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("teams_webhook_url", "https://sonarcompany.webhook.office.com/webhookb2/52feb105-fe74-52b9-8e90-5d165916fe22@61c6aa5a3-6531-4e28-9c0b-33ba1a8aa1ff/IncomingWebhook/f7fb2308e5f14431ace5b7cd0e670e42/4563618c-b03b-4e80-b093-28bb4ff11de8") // Noncompliant

Compliant solution

props.set("teams_webhook_url", System.getenv("TEAMS_WEBHOOK_URL"))

Resources

Standards

secrets:S6733

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Chaining of vulnerabilities

Triggering arbitrary workflows can lead to problems ranging from a denial of service to worse, depending on how the webhook’s data is handled. If the webhook performs a specific action that is affected by a vulnerability, the webhook acts as a remote attack vector on the enterprise.

Components affected by this webhook could, for example, experience unexpected failures or excessive resource consumption. If an affected component is a single point of failure (SPOF), this leak is critical.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Account termination

Unauthorized access to the service’s API keys can also result in resource abuse. Attackers can exploit the keys to send large volumes of spam or perform other resource-intensive operations, causing a significant strain on the service provider’s infrastructure.

The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("airtable_key", "key6yLyCekATg67Ts") // Noncompliant

Compliant solution

props.set("airtable_key", System.getenv("AIRTABLE_KEY"))

Resources

Standards

secrets:S6736

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

AMQP URLs containing credentials allow publishing messages to and consuming messages from the queue.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the credentials.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware deployed depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that harvests sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("amqp-url", "amqps://admin:password@example.com:8080/example") // Noncompliant

Compliant solution

props.set("amqp-url", System.getenv("amqps://"+System.getenv("AMQP_CREDENTIALS")+"@example.com:8080/example"))

Resources

Standards

secrets:S6760

Yandex Cloud is a complete platform that provides services such as virtual machines, cloud storage, API gateways, and private networks, to name a few. In Yandex Cloud, users are authenticated using secret keys and tokens. If one of these secrets is compromised, attackers will be able to perform any action on behalf of the account or user associated with it.

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to a Yandex token or key, they might be able to compromise your Yandex Cloud environment. This includes control over any applications or services that are running, as well as any data managed by the account.

What is the potential impact?

If an attacker manages to gain access to the Yandex Cloud environment, there are several ways they could seriously harm your organization. Any data stored in the environment could be leaked, and the environment itself could even be tampered with.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Infrastructure takeover

By obtaining a leaked secret, an attacker can gain control over your organization’s Yandex Cloud infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining.

This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions.

Furthermore, corporate Yandex Cloud infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data and to cause more damage to the overall infrastructure.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

import { Session, cloudApi, serviceClients } from '@yandex-cloud/nodejs-sdk';

const { resourcemanager: { cloud_service: { ListCloudsRequest } } } = cloudApi;

const session = new Session({ iamToken: 't1.7euelSbPyceKx87JqpuRl1qZiY-Ryi3rnpWaksrKaZqUppnLncmDnpeajZvl8_dZNAFl-e8ENXMH_t3z9xljfmT57wQ1cwf-.-LErty1vRh4S__VEp-aDnM5huB5MEfm_Iu1u2IzNgyrn0emiWDYA6rSQXDvzjE0O3HBbUlqoDeCmXYYInzZ6Cg' }); // Noncompliant
const cloudService = session.client(serviceClients.CloudServiceClient);

const response = await cloudService.list(ListCloudsRequest.fromPartial({
    pageSize: 100,
}));

Compliant solution

import { Session, cloudApi, serviceClients } from '@yandex-cloud/nodejs-sdk';

const { resourcemanager: { cloud_service: { ListCloudsRequest } } } = cloudApi;

const session = new Session({ iamToken: process.env.YANDEX_TOKEN });
const cloudService = session.client(serviceClients.CloudServiceClient);

const response = await cloudService.list(ListCloudsRequest.fromPartial({
    pageSize: 100,
}));

Resources

Documentation

Yandex OAuth token

Yandex IAM token

Yandex API key

Standards

secrets:S6764

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If attackers gain access to your WakaTime OAuth token or secret, they can potentially use it to make unauthorized requests to the WakaTime API on your behalf.

What is the potential impact?

Attackers exploiting leaked WakaTime OAuth tokens or secrets can potentially access sensitive information, modify data, or perform actions on behalf of the user without their consent. The exact capabilities of the attackers will depend on the authorizations the corresponding application has been granted.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.
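As an illustration, here is a minimal sketch of reading a secret from a vault at startup instead of hard-coding it. It assumes the HashiCorp Vault hvac Python client, VAULT_ADDR and VAULT_TOKEN environment variables, and a hypothetical KV v2 path; these names are not part of the rule:

import os

import hvac  # assumed Vault client library

# Authenticate with a token supplied by the runtime environment.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Read the application secret from a KV v2 mount (hypothetical path).
secret = client.secrets.kv.v2.read_secret_version(path="myapp/wakatime")
api_secret = secret["data"]["data"]["client_secret"]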

Code examples

Noncompliant code example

from rauth import OAuth2Service

service = OAuth2Service(
    client_id='d130uKF73fueZSCM9tUodIFN',
    client_secret='waka_sec_ez0kI3tQlYVvYSJOAjoI5n3PpyG69HQl91TZKFjSdb0X0XXgY7dahXiPpAhYL2kNxqDBzHuHNuzCPr5d', # Noncompliant
    name='wakatime',
    authorize_url='https://wakatime.com/oauth/authorize',
    access_token_url='https://wakatime.com/oauth/token',
    base_url='https://wakatime.com/api/v1/')

Compliant solution

import os
from rauth import OAuth2Service

service = OAuth2Service(
    client_id=os.environ['WAKA_CLIENT_ID'],
    client_secret=os.environ['WAKA_CLIENT_SECRET'],
    name='wakatime',
    authorize_url='https://wakatime.com/oauth/authorize',
    access_token_url='https://wakatime.com/oauth/token',
    base_url='https://wakatime.com/api/v1/')

Resources

Documentation

WakaTime API Documentation - WakaTime API Authentication

Standards

secrets:S6765

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

If an attacker gains access to a Figma personal access token, they might be able to compromise the data accessible to the linked Figma account and, in doing so, leak confidential data.

What is the potential impact?

Below are some real-world scenarios that may occur when a malicious entity manages to retrieve a leaked Figma personal access token.

Compromise of business-critical data

An attacker can use a personal access token to gain unauthorized access to your company’s Figma projects and designs. This can include confidential client data, proprietary design assets, or any other intellectual property stored in Figma. With unauthorized access, the attacker can download and share this sensitive data, potentially leading to data breaches, intellectual property theft, or other forms of unauthorized disclosure.

Unauthorized actions in Figma environment

With a leaked Figma personal access token, an attacker can perform various actions on behalf of your company within the Figma workspace. This can include creating, modifying, or deleting projects, files, or team members. By impersonating authorized users, the attacker can manipulate your company’s design assets or disrupt the design workflow. This can result in unauthorized changes and data loss.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

import requests

token = 'figd_OLDXZWOP4fxW4c9ER0xzxRda96M-f0eFwZpFQjHJ'  # Noncompliant
response = requests.get('https://api.figma.com/v1/me', headers={
    'X-FIGMA-TOKEN': token,
    'Content-Type': 'application/json'
})

Compliant solution

import os
import requests

token = os.getenv('FIGMA_PERSONAL_ACCESS_TOKEN')
response = requests.get('https://api.figma.com/v1/me', headers={
    'X-FIGMA-TOKEN': token,
    'Content-Type': 'application/json'
})

Resources

Documentation

Figma Developers - Access tokens

Standards

secrets:S6777

Shippo is a multi-carrier shipping platform that helps businesses streamline their shipping processes. It provides a unified API and dashboard that allows businesses to connect with multiple shipping carriers. Shippo API tokens are used for authentication and authorization purposes when making API requests.

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Shippo API token is leaked, it can have several consequences:

Financial Loss

If the leaked API token is used to generate shipping labels or make shipping-related transactions, it can result in financial loss. Unauthorized individuals may exploit the token to generate fraudulent labels or make unauthorized shipments, leading to additional shipping costs or potential chargebacks.

Data Breach

If the leaked API token is associated with a user account that has access to sensitive customer or business data, it can result in a data breach. This can lead to the exposure of personal information, shipping addresses, payment details, or other confidential data, potentially causing harm to your customers and your business reputation.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will make it possible to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

Shippo.setApiKey('shippo_live_258d9b4c41a8cb88ca7fb4b12c65083f658435ac'); // Noncompliant

HashMap<String, Object> addressMap = new HashMap<String, Object>();
addressMap.put("name", "Mr. Hippo");
addressMap.put("company", "Shippo");
addressMap.put("street1", "215 Clayton St.");
addressMap.put("city", "San Francisco");
addressMap.put("state", "CA");
addressMap.put("zip", "94117");
addressMap.put("country", "US");
addressMap.put("phone", "+1 555 341 9393");
addressMap.put("email", "support@goshipppo.com");

Address createAddress = Address.create(addressMap);

Compliant solution

Shippo.setApiKey(System.getenv("SHIPPO_API_TOKEN"));

HashMap<String, Object> addressMap = new HashMap<String, Object>();
addressMap.put("name", "Mr. Hippo");
addressMap.put("company", "Shippo");
addressMap.put("street1", "215 Clayton St.");
addressMap.put("city", "San Francisco");
addressMap.put("state", "CA");
addressMap.put("zip", "94117");
addressMap.put("country", "US");
addressMap.put("phone", "+1 555 341 9393");
addressMap.put("email", "support@goshipppo.com");

Address createAddress = Address.create(addressMap);

Resources

Standards

secrets:S6334

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Google API keys are used to authenticate applications that consume Google Cloud APIs.

API keys are not strictly secret as they are often embedded into client-side code or mobile applications that consume Google Cloud APIs. Still, they should be secured.

Financial loss

An unrestricted Google API key being disclosed in a public source code could be used by malicious actors to consume Google APIs on behalf of your application.
This will have a financial impact as your organization will be billed for the data consumed by the malicious actor.

Denial of service

If your account has enabled quota to cap the API consumption of your application, this quota can be exceeded, leaving your application unable to request the Google APIs it requires to function properly.

How to fix it

Depending on the sensitivity of the key’s use, only administrators should have access to the Google API keys used by your application.

For client-facing keys

If the key must be sent to clients for the service to run properly, then it does not need to be revoked or added to a vault; in that case, ignore the following sections.
However, it is important to apply Google’s guidelines to secure the API keys.

These best practices will help mitigate abuse of this key.

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("google-api-key", "zAIJf4Six4MjGwxvkarrf1LPUaCdyNSjzsyIoRI") // Noncompliant

Compliant solution

props.set("google-api-key", System.getenv("GOOGLE_API_KEY"))

Resources

Standards

secrets:S6335

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that exfiltrates sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can call the third-party service without limit and for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

Here is an example of a service account key file. In general, it takes the form of a JSON file, as demonstrated in the GCP docs.

{
    "type": "service_account",
    "project_id": "example-project",
    "private_key_id": "2772b8e6f42dc67369b98f0b91694f7805b28844",
    "private_key": "-----BEGIN PRIVATE KEY-----\nKBww9jggAgBEHBCBAASIMDsoCBAuAQINAgFAGSXQTkiAE0cEIkoQghJAqGavB/r3\n2W6raHa1Qrfj6pii5U2Ok53SxCyK3TxYc3Bfxq8orZeYC9LQ/I3tz7w4/BnT71AD\nfP1i8SWHsRMIicSuVFcRoYMA+A1eNSmdrujdBNWgedfuSyHbPnNY7s8BBUIoBN7I\n8gJG5DUUKAZfZDB2c/n7Yu0=\n-----END PRIVATE KEY-----\n",
    "client_email": "example@example.iam.gserviceaccount.example.com",
    "client_id": "492539091821492546176",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/example%40example.iam.gserviceaccount.example.com",
    "universe_domain": "googleapis.com"
}

Compliant solution

Always avoid committing service account key files to version control or other shared systems. Use the relevant ignore files, such as .gitignore, .dockerignore, and their equivalents for any other system that accesses your local codebase.
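If the application itself needs the key at runtime, one option is to point to the file through configuration rather than committing it. Below is a minimal sketch assuming the google-auth Python package and the standard GOOGLE_APPLICATION_CREDENTIALS environment variable:

import os

from google.oauth2 import service_account  # assumes the google-auth package

# The key file path is resolved at runtime; the file itself stays out of the repository.
key_path = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
credentials = service_account.Credentials.from_service_account_file(key_path)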

Resources

Standards

secrets:S6336

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

This rule flags instances of:

  • Alibaba Cloud AccessKey ID
  • Alibaba Cloud AccessKey secret

What is the potential impact?

AccessKeys are long-term credentials designed to authenticate and authorize requests to Alibaba Cloud.

If your application interacts with Alibaba Cloud then it requires AccessKeys to access all the resources it needs to function properly. Resources that can be accessed depend on the permissions granted to the Alibaba Cloud account.
These credentials may authenticate to the account root user, who has unrestricted access to all resources in your Alibaba Cloud account, including billing information.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can call the third-party service without limit and for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that exfiltrates sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

How to fix it

Only administrators should have access to the AccessKeys used by your application.

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("alibaba-key", "LTAI5tBcc9SecYAo") // Noncompliant

Compliant solution

props.set("alibaba-key", System.getenv("ALIBABA_KEY"))

Resources

Standards

secrets:S6696

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

SendGrid keys are used for authentication and authorization when using the SendGrid email delivery service.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that exfiltrates sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Account termination

Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure.

The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("sg_key", "SG.Wjo5QoWqTmrFtMUf8m2T.CIY0Z24e5sJawIymiK_ZKC_7I15yDP0ur1yt0qtkR9Go") // Noncompliant

Compliant solution

props.set("sg_key", System.getenv("SG_KEY"))

Resources

Standards

Documentation

secrets:S6698

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in PostgreSQL are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a PostgreSQL password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a PostgreSQL database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the PostgreSQL instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will make it possible to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

By default, the PostgreSQL server does not log connection information. The log_connections parameter must be set to on in the server configuration for this to happen.
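For reference, here is a minimal sketch of enabling that parameter programmatically. The psycopg2 driver and the PG_ADMIN_URI variable are assumptions for the example; the same change can also be made directly in postgresql.conf:

import os

import psycopg2  # assumed driver; the connection must belong to a superuser

conn = psycopg2.connect(os.environ["PG_ADMIN_URI"])
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute("ALTER SYSTEM SET log_connections = 'on'")
    cur.execute("SELECT pg_reload_conf()")  # applies to new sessions without a restart
conn.close()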

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Never hard-code secrets, not even the default values

It is important that you do not hard-code secrets, even default values.

First, hard-coded default secrets are often short and can be easily compromised even by attackers who do not have access to the code base.

Second, hard-coded default secrets can cause problems if they need to be changed or replaced.

And most importantly, there is always the possibility to accidentally set default secrets for production services, which can lead to security vulnerabilities and make production insecure by default.

To minimize these risks, it is recommended to apply the above strategies, even for the default settings.
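For example, here is a minimal sketch of failing fast when a required secret is missing, rather than substituting a hard-coded default; the PG_PASSWORD variable name is only illustrative:

import os

# Refuse to start rather than silently fall back to a built-in default secret.
password = os.environ.get("PG_PASSWORD")
if not password:
    raise RuntimeError("PG_PASSWORD is not set; no default password is provided")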

Code examples

Noncompliant code example

uri = "postgres://foouser:foopass@example.com/testdb"

Compliant solution

import os

user = os.environ["PG_USER"]
password = os.environ["PG_PASSWORD"]
uri = f"postgres://{user}:{password}@example.com/testdb"

Resources

Standards

Documentation

secrets:S6699

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The Spotify API secret is a confidential key used for authentication and authorization purposes when accessing the Spotify API.

The Spotify API grants applications access to Spotify’s services and, by extension, user data. Should this secret fall into the wrong hands, two immediate concerns arise: unauthorized access to user data and data manipulation.

When unauthorized entities obtain the API secret, they have potential access to users' personal Spotify information. This includes the details of their playlists, saved tracks, and listening history. Such exposure might not only breach personal boundaries but also infringe upon privacy standards set by platforms and regulators.

In addition to simply gaining access, there is the risk of data manipulation. If malicious individuals obtain the secret, they could tamper with user content on Spotify. This includes modifying playlists, deleting beloved tracks, or even adding unsolicited ones. Such actions not only disrupt the user experience but also violate the trust that users have in both Spotify and third-party applications connected to it.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("spotify_secret", "f3fbd32510154334aaf0394aca3ac4c3") // Noncompliant

Compliant solution

props.set("spotify_secret", System.getenv("SPOTIFY_SECRET"))

Resources

Standards

secrets:S6731

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Slack bot tokens grant multiple types of access to a channel: they can post messages and read usernames and user email addresses, among a plethora of other capabilities.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this token to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that exfiltrates sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("slack_bot_token", "xoxb-592666205443-2542034435697-FM7vdsq184d0G5vBNiOq8MSF8t7") // Noncompliant

Compliant solution

props.set("slack_bot_token", System.getenv("SLACK_BOT_TOKEN"))

Resources

Standards

secrets:S6732

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity.

All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.
This can range from indirect losses to direct unauthorized transfers of funds that can lead to bankruptcy or impoverishment of individuals.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("stripe_key", "sk_live_kiSSAXe2IyGNvprHode7efRT") // Noncompliant

Compliant solution

props.set("stripe_key", System.getenv("STRIPE_KEY"))

Resources

Standards

secrets:S6739

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the credentials.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("redis-url", "rediss://admin:password@example.com:8080/example") // Noncompliant

Compliant solution

props.set("redis-url", System.getenv("REDIS_URL"))

Resources

Standards

secrets:S6773

HashiCorp Vault is a popular open-source tool used for securely storing and accessing sensitive data such as passwords, API keys, certificates, and encryption keys. It provides a centralized solution for managing secrets and helps organizations enforce security best practices.

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a HashiCorp Vault token is compromised, it can have serious consequences for the security of the system and the sensitive data stored within the Vault. Here are some potential consequences:

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information.
The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can call the third-party service without limit and for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity.

All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will make it possible to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

import hvac

client = hvac.Client(url='https://vault.example.com', token='hvb.AAAAAQJyBEVF-vTWUrg0hcoIPuvKjjNxXXZ5MfsYVg2gJ0fGZpVi0IGTFfh4TqsoQIWaocNRXD1qzGXvhIHWJBM_rWU9YJY8sXOYVy_s1JAHasXJwGmZ_fBLJfSG6aCwQkCGwtAhYw') # Noncompliant

secret = client.secrets.kv.v2.read_secret_version(path='secret/myapp')
data = secret['data']
username = data.get('username')
password = data.get('password')

Compliant solution

import os

import hvac

client = hvac.Client(url='https://vault.example.com', token=os.environ.get('VAULT_TOKEN'))

secret = client.secrets.kv.v2.read_secret_version(path='secret/myapp')
data = secret['data']
username = data.get('username')
password = data.get('password')

Resources

Documentation

Hashicorp API Documentation - Tokens

Hashicorp API Tutorial - Tokens

Hashicorp API Tutorial - Batch tokens

Standards

secrets:S6290

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

This rule detects the following leaks:

  • AWS Secret Access Keys
  • AWS Access Key IDs
  • AWS Session Tokens

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that exfiltrates sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can call the third-party service without limit and for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("aws-secret-access-key", "kHeUAwnSUizTWpSbyGAz4f+As5LshPIjvtpswqGb") // Noncompliant

Compliant solution

props.set("aws-secret-access-key", System.getenv("AWS_SECRET_ACCESS_KEY"))

Resources

Standards

secrets:S6292

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If your application interacts with Amazon MWS then it requires credentials to access all the resources it needs to function properly.
The credentials authenticate to a seller account which can have access to resources like products, orders, price or shipment information.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Financial loss

Since this secret is used to process transaction-related operations, financial loss may also occur if transaction-related objects are corrupted or the account is tampered with.
This can range from indirect losses to direct unauthorized transfers of funds that can lead to bankruptcy or impoverishment of individuals.

Phishing and spam

An attacker can use this secret to spam users or lure them into following links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets.
The type of malware depends on the attacker’s intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that exfiltrates sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

Account termination

Unauthorized access to mailing service API keys can also result in resource abuse. Attackers can exploit the API keys to send a large volume of spam emails or perform other resource-intensive operations, causing a significant strain on the mailing service provider’s infrastructure.

The service provider, being vigilant about such activities, may flag your account and take action against it. This could lead to the suspension or termination of the compromised account, thus causing significant inconvenience and potential loss of communication with your customers or partners.

How to fix it

Only administrators should have access to the MWS credentials used by your application.

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("mws-key", "amzn.mws.3b8be74a-5f63-5770-5bad-19bd40c0ac65") // Noncompliant

Compliant solution

props.set("mws-key", System.getenv("MWS_KEY"))

Resources

Standards

secrets:S6690

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

GitLab tokens are used for authentication and authorization purposes. They are essentially access credentials that allow users or applications to interact with the GitLab API.

With a GitLab token, you can perform various operations such as creating, reading, updating, and deleting resources like repositories, issues, merge requests, and more. Tokens can also be scoped to limit the permissions and actions that can be performed.
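As an illustration of such token-based access, here is a minimal sketch assuming the python-gitlab client library and an injected GITLAB_TOKEN variable:

import os

import gitlab  # assumed client library (python-gitlab)

# The token is supplied by the environment rather than hard-coded.
gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("example/project")  # hypothetical project path
print(project.name)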

A leaked GitLab token can have significant consequences for the security and integrity of the associated account and resources. It exposes the account to unauthorized access, potentially leading to data breaches and malicious actions. The unintended audience can exploit the leaked token to gain unauthorized entry into the GitLab account, allowing them to view, modify, or delete repositories, issues, and other resources. This unauthorized access can result in the exposure of sensitive data, such as proprietary code, customer information, or confidential documents, leading to potential data breaches.

Moreover, the unintended audience can perform malicious actions within the account, introducing vulnerabilities, injecting malicious code, or tampering with settings. This can compromise the security of the account and the integrity of the software development process.

Additionally, a leaked token can enable the unintended audience to take control of the GitLab account, potentially changing passwords, modifying settings, and adding or removing collaborators. This account takeover can disrupt development and collaboration workflows, causing reputational damage and operational disruptions.

Furthermore, the impact of a leaked token extends beyond the immediate account compromise. It can have regulatory and compliance implications, requiring organizations to report the breach, notify affected parties, and potentially face legal and financial consequences.

In general, the compromise of a GitLab token can lead to what are known as supply chain attacks, which can affect more than one’s own organization.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will make it possible to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("token", "glpat-zcs1FfaxGnHfvzd7ExHz") // Noncompliant

Compliant solution

props.set("token", System.getenv("TOKEN"))

Resources

Standards

secrets:S6691

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A Google client OAuth secret is a confidential string that is used to authenticate and authorize applications when they interact with Google APIs. It is a part of the OAuth 2.0 protocol, which allows applications to access user data on their behalf.

The client secret is used in the OAuth flow to verify the identity of the application and ensure that only authorized applications can access user data. It is typically used in combination with a client ID, which identifies the application itself.

If a Google client OAuth secret leaks to an unintended audience, it can have serious security implications. Attackers who obtain the client secret can use it to impersonate the application and gain unauthorized access to user data. They can potentially access sensitive information, modify data, or perform actions on behalf of the user without their consent.

The exact capabilities of the attackers will depend on the authorizations the corresponding application has been granted.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow you to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Google Cloud console provides a Logs Explorer feature that can be used to audit recent access to a cloud infrastructure.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("client_secret", "TgxYWFmND-1NTYwNTgzMDM3N") // Noncompliant

Compliant solution

props.set("client_secret", System.getenv("CLIENT_SECRET"))

Resources

Standards

Documentation

secrets:S6692

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

A reCaptcha secret key is a unique token that is used to verify the authenticity of reCaptcha requests made from an application to the reCaptcha service. It is a key component in ensuring that CAPTCHA challenges issued by the application are properly solved and verified.

If a reCaptcha secret key leaks to an unintended audience, attackers with access to it will be able to forge CAPTCHA responses without solving them. It will allow them to bypass the CAPTCHA challenge verification.

This can lead to an influx of spam submissions, automated attacks, or unauthorized access attempts depending on the feature the CAPTCHA mechanism is intended to protect.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("recaptcha_secret", "6LcaQa4mAAAAAFvhmzAd2hErGBSt4FC-BPzm4cNS") // Noncompliant

Compliant solution

props.set("recaptcha_secret", System.getenv("RECAPTCHA_SECRET"))

Resources

Standards

Documentation

secrets:S6693

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

SSH private keys are used for authentication and secure communication in SSH (Secure Shell) protocols. They are a form of asymmetric cryptography, where a pair of keys is generated: a private key and a corresponding public key. SSH keys provide a secure and efficient way to authenticate and establish secure connections between clients and servers. They are widely used for remote login, file transfer, and secure remote administration.

When an SSH private key is leaked to an unintended audience, it can have severe consequences for security and confidentiality. One of the primary outcomes is unauthorized access. The unintended audience can exploit the leaked private key to authenticate themselves as the legitimate owner, gaining unauthorized entry to systems, servers, or accounts that accept the key for authentication. This unauthorized access opens the door for various malicious activities, including data breaches, unauthorized modifications, and misuse of sensitive information.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow you to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Depending on the information system the key is used to authenticate against, the audit method might change. For example, on Linux systems, the system-wide authentication logs could be used to audit recent connections from an affected account.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

String key = """
    -----BEGIN OPENSSH PRIVATE KEY-----
    b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
    QyNTUxOQAAACDktj2RM1D2wRTQ0H+YZsFqnAuZrqBNEB4PpJ5xm73nWwAAAJgJVPFECVTx
    RAAAAAtzc2gtZWQyNTUxOQAAACDktj2RM1D2wRTQ0H+YZsFqnAuZrqBNEB4PpJ5xm73nWw
    AAAECQ8Nzp6a1ZJgS3SWh2pMxe90W9tZVDZ+MZT35GjCJK2uS2PZEzUPbBFNDQf5hmwWqc
    C5muoE0QHg+knnGbvedbAAAAFGdhZXRhbmZlcnJ5QFBDLUwwMDc3AQ==
    -----END OPENSSH PRIVATE KEY-----""";

Compliant solution

String key = System.getenv("SSH_KEY");

Resources

Standards

secrets:S6694

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

Passwords in MongoDB are used to authenticate users against the database engine. They are associated with user accounts that are granted specific permissions over the database and its hosted data.

If a MongoDB password leaks to an unintended audience, it can have serious consequences for the security of your database, the data stored within it, and the applications that rely on it.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers knowing an authentication secret could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Security downgrade

Applications relying on a MongoDB database instance can suffer a security downgrade if an access password is leaked to attackers. Depending on the purposes the application uses the database for, consequences can range from low-severity issues, like defacement, to complete compromise.

For example, if the MongoDB instance is used as part of the authentication process of an application, attackers with access to the database will likely be able to bypass this security mechanism.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow you to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

MongoDB instances maintain a log that includes user authentication events. This log can be used to audit recent malicious connections.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

uri = "mongodb://foouser:foopass@example.com/testdb"

Compliant solution

import os

user = os.environ["MONGO_USER"]
password = os.environ["MONGO_PASSWORD"]
uri = f"mongodb://{user}:{password}@example.com/testdb"

Resources

Standards

Documentation

secrets:S6695

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

WeChat application keys are used for authentication and authorization purposes when integrating third-party applications with the WeChat platform.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the secret.

Compromise of sensitive personal data

This kind of service is often used to exchange information that could include personal information, chat logs, and other private data that users have shared on the platform. This is called Personally Identifiable Information. The leaked app key could provide a gateway for unauthorized individuals to access and misuse this data, compromising the privacy and safety of the application users.

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If this kind of sensitive personal data gets leaked, companies face legal consequences, penalties, or violations of privacy laws.

Phishing and spam

An attacker can use this secret to spam users or lure them into links to a malicious domain controlled by the attacker.

Spam can cause users to be exposed to the following:

  • Unsolicited, inappropriate content, such as pornographic material
  • Fraudulent attempts to trick users into sending information or money
  • Abusive or hateful statements
  • False advertising or fraudulent claims

Once a user has been phished on a legitimate-seeming third-party website, an attacker can collect the user’s credentials, bypass multi-factor authentication (MFA), and take over the user’s account on the trusted website.

Malware distribution

Due to this vulnerability, malware can be stored and spread, both to users of the service and to other potential targets. The type of malware deployed depends on the attacker's intentions, as the following examples show:

  • Cryptojacking malware, whose goal is to "mine" cryptocurrencies on the affected computers or servers.
  • Spyware that spies out sensitive information from victims.

In the worst case, malware can cause the target systems to be completely compromised and allow attackers to infiltrate the systems.

WeChat exploitation

Furthermore, the leaked app key could enable unauthorized parties to manipulate or disrupt the functionality of the WeChat app. They could tamper with app settings, inject malicious code, or even take control of the app’s user base. Such actions could result in a loss of user trust, service disruptions, and reputational damage for both the app developer and the WeChat platform.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow you to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

props.set("secret_key", "40b6b70508b47cbfb4ee39feb617a05a") // Noncompliant

Compliant solution

props.set("secret_key", System.getenv("SECRET_KEY"))

Resources

Standards

secrets:S6771

Postman is an API development and testing platform that allows developers to design, build, and test APIs. Postman tokens are used for authentication and authorization purposes when making API requests.

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Postman token is leaked or compromised, it can lead to several security issues and risks. Here are some potential consequences:

Unauthorized access

An attacker who gains access to a leaked token can use it to impersonate the legitimate user or application associated with the token. This can result in unauthorized access to sensitive data or functionality within the API.

Data breaches

If the leaked token provides access to sensitive data, an attacker can use it to retrieve or manipulate that data. This can lead to data breaches that compromise the confidentiality and integrity of the information. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

API abuse

With a leaked token, an attacker can abuse the API by making unauthorized requests, consuming excessive resources, or performing malicious actions. This can disrupt the API’s regular operation, impact performance, or even cause denial-of-service (DoS) attacks.

Privilege escalation

Depending on the permissions and scope associated with the token, an attacker may be able to escalate their privileges within the API. They can gain access to additional resources or perform actions that they are not authorized to do.

Breach of trust in non-repudiation and disruption of the audit trail

When such a secret is compromised, malicious actors might have the possibility to send malicious event objects, causing discrepancies in the audit trail. This can make it difficult to trace and verify the sequence of events, impacting the ability to investigate and identify unauthorized or fraudulent activity.

All in all, this can lead to problems in proving the validity of transactions or actions performed, potentially leading to disputes and legal complications.

Reputation damage

If a token is leaked and used for malicious purposes, it can damage the reputation of the API provider. Users may lose trust in the security of the API, leading to a loss of business and credibility.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow you to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

const axios = require('axios');

const apiKey = 'PMAK-6502e63761882f002a69f0cb-6d9bc58cd0cc60ff5547f81cf2ca141bb9'; // Noncompliant
const options = {
    method: 'get',
    url: 'https://api.getpostman.com/me',
    headers: {
        'Content-Type': 'application/json',
        'X-API-Key': apiKey
    }
};

(async() => { await axios(options); })();

Compliant solution

const axios = require('axios');

const apiKey = process.env.POSTMAN_API_KEY;
const options = {
    method: 'get',
    url: 'https://api.getpostman.com/me',
    headers: {
        'Content-Type': 'application/json',
        'X-API-Key': apiKey
    }
};

(async() => { await axios(options); })();

Resources

Documentation

Postman API

Articles & blog posts

How to Get Started with the Postman API

Standards

kotlin:S2068

Because it is easy to extract strings from an application's source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

val params = "password=xxxx" // Sensitive
val writer = OutputStreamWriter(getOutputStream())
writer.write(params)
writer.flush()
...
val password = "xxxx" // Sensitive
...

Compliant Solution

val params = "password=${retrievePassword()}"
val writer = OutputStreamWriter(getOutputStream())
writer.write(params)
writer.flush()
...
val password = retrievePassword()
...
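
The retrievePassword() helper above is left unspecified by the rule. Below is a minimal sketch of one possible implementation, assuming the credential is injected through an environment variable with a fallback to a configuration file kept out of the code repository; the variable name, file path, and property key are illustrative.

import java.io.FileInputStream
import java.util.Properties

// Sketch: resolve the password from the environment first, then from a
// properties file that is never committed to version control.
fun retrievePassword(): String =
    System.getenv("DB_PASSWORD")
        ?: Properties()
            .apply { FileInputStream("/etc/myapp/secrets.properties").use { load(it) } }
            .getProperty("db.password")
        ?: error("No database password configured")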

See

kotlin:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases the chances that attackers can successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

The following clients from the Apache Commons Net library are based on unencrypted protocols and are not recommended:

val telnet = TelnetClient(); // Sensitive

val ftpClient = FTPClient(); // Sensitive

val smtpClient = SMTPClient(); // Sensitive

Unencrypted HTTP connections, when using okhttp library for instance, should be avoided:

val spec: ConnectionSpec = ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT) // Sensitive
  .build()

Android WebView can be configured to allow a secure origin to load content from any other origin, even if that origin is insecure (mixed content):

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_ALWAYS_ALLOW) // Sensitive

Compliant Solution

Use the following clients from the Apache Commons Net and JSch libraries instead:

val jsch = JSch()

if (implicit) {
  // implicit mode is considered deprecated but offers the same security as explicit mode
  val ftpsClient = FTPSClient(true)
} else {
  val ftpsClient = FTPSClient()
}

if (implicit) {
  // implicit mode is considered deprecated but offers the same security as explicit mode
  val smtpsClient = SMTPSClient(true)
} else {
  val smtpsClient = SMTPSClient()
  smtpsClient.connect("127.0.0.1", 25)
  if (smtpsClient.execTLS()) {
    // commands
  }
}

Perform HTTP encrypted connections, with okhttp library for instance:

val spec: ConnectionSpec = ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
  .build()

The most secure mode for Android WebView is MIXED_CONTENT_NEVER_ALLOW:

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

kotlin:S6300

Storing files locally is a common task for mobile applications. Files that are stored unencrypted can be read out and modified by an attacker with physical access to the device. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The file contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local files that contain sensitive information. The class EncryptedFile can be used to easily encrypt files.

Sensitive Code Example

val targetFile = File(activity.filesDir, "data.txt")
targetFile.writeText(fileContent)  // Sensitive

Compliant Solution

val mainKey = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)

val encryptedFile = EncryptedFile.Builder(
    File(activity.filesDir, "data.txt"),
    activity,
    mainKey,
    EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build()

encryptedFile.openFileOutput().apply {
    write(fileContent)
    flush()
    close()
}

See

kotlin:S6301

When storing local data in a mobile application, it is common to use a database that can be encrypted. When encryption of this database is enabled, the encryption key must be protected properly.

Why is this an issue?

Mobile applications often need to store data (which might be sensitive) locally. For Android, there exist several libraries that simplify this process by offering a feature-rich database system. SQLCipher and Realm are examples of such libraries. These libraries often add support for database encryption, to protect the contents from being read by other apps or by attackers.

When using encryption for such a database, it is important that the encryption key stays secret. If this key is hardcoded in the application, then it should be considered compromised. The key will be known by anyone with access to the application’s binary code or source code. This means that the sensitive encrypted data can be decrypted by anyone having access to the binary of the mobile application.

Furthermore, if the key is hardcoded, it is the same for every user. A compromise of this encryption key implicates every user of the app.

The encryption key is meant to stay secret, so it should never be hard-coded in the application.

What is the potential impact?

If an attacker is able to find the encryption key for the mobile database, this can potentially have severe consequences.

Theft of sensitive data

If a mobile database is encrypted, it is likely to contain data that is sensitive for the user or the app publisher. For example, it can contain personally identifiable information (PII), financial data, login credentials, or other sensitive user data.

By not protecting the encryption key properly, it becomes very easy for an attacker to recover it and then decrypt the mobile database. At that point, the theft of sensitive data might lead to identity theft, financial fraud, and other forms of malicious activities.

How to fix it in Realm

Code examples

In the example below, a local database is opened using a hardcoded key. To fix this, the key is moved to a secure location instead and retrieved using a getKey() method.

Noncompliant code example

val key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g"
val config = RealmConfiguration.Builder()
    .encryptionKey(key.toByteArray()) // Noncompliant
    .build()
val realm = Realm.getInstance(config)

Compliant solution

val config = RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build()
val realm = Realm.getInstance(config)

How does this work?

Using Android’s builtin key storage options

The Android Keystore system allows apps to store encryption keys in a container that is protected on a system level. Additionally, it can restrict when and how the keys are used. For example, it allows the app to require user authentication (for example using a fingerprint) before the key is made available. This is the recommended way to store cryptographic keys on Android.
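
A minimal sketch of this approach is shown below. Because Android Keystore keys cannot be exported, an application would typically use such a key to encrypt (wrap) a separately generated random database key rather than hand it to the database library directly; the alias name is illustrative.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyStore
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Sketch: create (or load) an AES key that never leaves the Android Keystore.
fun getOrCreateKeystoreKey(alias: String = "db_key_wrapper"): SecretKey {
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    (keyStore.getKey(alias, null) as? SecretKey)?.let { return it }

    val generator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    generator.init(
        KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    return generator.generateKey()
}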

Dynamically retrieving encryption keys remotely

As user devices are less trusted than controlled environments such as the application backend, the latter should be preferred for the storage of encryption keys. This requires that a user’s device has an internet connection, which may not be suitable for every use case.
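
A sketch of the remote approach follows; the endpoint URL is a placeholder, and real code would add authentication, certificate pinning, and error handling.

import java.net.URL

// Sketch: fetch the database encryption key from a trusted backend at startup.
fun fetchKeyFromBackend(): ByteArray =
    URL("https://backend.example.com/keys/db").openStream().use { it.readBytes() }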

Going the extra mile

Avoid storing sensitive data on user devices

In general, it is always preferable to store as little sensitive data on user devices as possible.

Of course, some sensitive data always has to be stored on client devices, such as the data required for authentication. In this case, consider whether the application logic can also function with a hash (or otherwise non-reversible form) of that data. For example, if an email address is required for authentication, it might be possible to use and store a hashed version of this address instead.
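
Below is a sketch of this idea, hashing the address before it is stored. Note that an unsalted hash of a low-entropy value such as an email address remains linkable across systems, so a keyed hash (for example, an HMAC with a server-side secret) would be stronger.

import java.security.MessageDigest

// Sketch: store only a non-reversible form of the email address.
fun hashedEmail(email: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest(email.lowercase().toByteArray(Charsets.UTF_8))
        .joinToString("") { "%02x".format(it) }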

Resources

Documentation

Standards

kotlin:S6432

When encrypting data using AES-GCM or AES-CCM, it is essential not to reuse the same initialization vector (IV, also called nonce) with a given key. To prevent this, it is recommended to either randomize the IV for each encryption or increment the IV after each encryption.

Why is this an issue?

When encrypting data using a counter (CTR) derived block cipher mode of operation, it is essential not to reuse the same initialization vector (IV) for a given key. An IV that complies with this requirement is called a "nonce" (number used once). Galois/Counter (GCM) and Counter with Cipher Block Chaining-Message Authentication Code (CCM) are both derived from counter mode.

When using AES-GCM or AES-CCM, a given key and IV pair will create a "keystream" that is used to encrypt a plaintext (original content) into a ciphertext (encrypted content). For any key and IV pair, this keystream is always deterministic. Because of this property, encrypting several plaintexts with one key and IV pair can be catastrophic. If an attacker has access to one plaintext and its associated ciphertext, they are able to decrypt everything that was created using the same pair.

Additionally, IV reuse also drastically decreases the key recovery computational complexity by downgrading it to a simpler polynomial root-finding problem. This means that even without access to a plaintext/ciphertext pair, an attacker may still be able to decrypt all the sensitive data.

What is the potential impact?

If the encryption that is being used is flawed, attackers might be able to exploit it in several ways. They might be able to decrypt existing sensitive data or bypass key protections.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By not using the encryption algorithm correctly, the likelihood that an attacker might be able to recover the original sensitive data drastically increases.

Additional attack surface

Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. If an attacker is able to modify the cleartext of the encrypted message, it might be possible to trigger other vulnerabilities in the code.

How to fix it in Java Cryptography Extension

Code examples

The example uses a hardcoded IV as a nonce, which causes AES-GCM to be insecure. To fix it, a nonce is randomly generated instead.

Noncompliant code example

fun encrypt(key: ByteArray, ptxt: ByteArray) {
    val iv = "7cVgr5cbdCZV".toByteArray()

    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    val keySpec = SecretKeySpec(key, "AES")
    val gcmSpec = GCMParameterSpec(128, iv)

    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec) // Noncompliant
}

Compliant solution

fun encrypt(key: ByteArray, ptxt: ByteArray) {
    val random = SecureRandom()
    val iv = ByteArray(12)
    random.nextBytes(iv)

    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    val keySpec = SecretKeySpec(key, "AES")
    val gcmSpec = GCMParameterSpec(128, iv)

    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec)
}

How does this work?

For AES-GCM and AES-CCM, NIST recommends generating a nonce using either a deterministic approach or using a 'Random Bit Generator (RBG)'.

Generating nonces using random number generation

When using a randomized approach, NIST recommends a nonce of at least 96 bits using a cryptographically secure pseudorandom number generator (CSPRNG). Such a generator can create output with a sufficiently low probability of the same number being output twice (also called a collision) for a long time. However, after 2^32 generated numbers for the same key, NIST recommends rotating this key for a new one. After that amount of generated numbers, the probability of a collision is high enough to be considered insecure.

The code example above demonstrates how CSPRNGs can be used to generate nonces.

Be careful to use a random number generator that is sufficiently secure. Default (non-cryptographically secure) RNGs might be more prone to collisions in their output, which is catastrophic for counter-based encryption modes.

Deterministically generating nonces

One method to prevent the same IV from being used multiple times for the same key is to update the IV in a deterministic way after each encryption. The most straightforward deterministic method for this is a counter.

The way this works is simple: for any key, the first IV is the number zero. After this IV is used to encrypt something with a key, it is incremented for that key (and is now equal to 1). Although this requires additional bookkeeping, it should guarantee that for each encryption key, an IV is never repeated.

For a secure implementation, NIST suggests generating these nonces in two parts: a fixed field and an invocation field. The fixed field should be used to identify the device executing the encryption (for example, it could contain a device ID), such that for one key, no two devices can generate the same nonce. The invocation field contains the counter as described above. For a 96-bit nonce, NIST recommends (but does not require) using a 32-bit fixed field and a 64-bit invocation field. Additional details can be found in the NIST Special Publication 800-38D.
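
The following sketch illustrates such a two-part nonce, using a 32-bit fixed field and a 64-bit invocation field as NIST recommends; the class and parameter names are illustrative, and a production implementation would also have to persist the counter and rotate the key before the counter can wrap.

import java.nio.ByteBuffer

// Sketch: 96-bit deterministic nonce = 32-bit device ID || 64-bit counter.
class NonceGenerator(private val deviceId: Int) {
    private var counter: Long = 0

    fun next(): ByteArray {
        val nonce = ByteBuffer.allocate(12)
            .putInt(deviceId) // fixed field: identifies the encrypting device
            .putLong(counter) // invocation field: incremented on every use
            .array()
        counter++ // never reuse a counter value with the same key
        return nonce
    }
}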

Resources

Standards

kotlin:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptography Extension

Code examples

Noncompliant code example

import java.nio.charset.StandardCharsets
import java.security.InvalidAlgorithmParameterException
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

fun encrypt(key: String, plainText: String) {

    val randomBytes = "7cVgr5cbdCZVw5WY".toByteArray(StandardCharsets.UTF_8)

    val iv      = IvParameterSpec(randomBytes)
    val keySpec = SecretKeySpec(key.toByteArray(StandardCharsets.UTF_8), "AES")

    try {
        val cipher = Cipher.getInstance("AES/CBC/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv) // Noncompliant

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidKeyException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

Compliant solution

In this example, the code explicitly uses a number generator that is considered strong.

import java.nio.charset.StandardCharsets
import java.security.SecureRandom
import java.security.InvalidAlgorithmParameterException
import java.security.InvalidKeyException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

fun encrypt(key: String, plainText: String) {

    val random      = SecureRandom()
    val randomBytes = ByteArray(16)
    random.nextBytes(randomBytes)

    val iv      = IvParameterSpec(randomBytes)
    val keySpec = SecretKeySpec(key.toByteArray(StandardCharsets.UTF_8), "AES")

    try {
        val cipher = Cipher.getInstance("AES/CBC/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv)

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidKeyException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.
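
Since the IV may travel with the ciphertext, a common pattern is to prepend it, as the following sketch shows; the helper name is illustrative, and PKCS5 padding is used so plaintexts of any length can be encrypted.

import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec

// Sketch: generate a fresh random IV per message and ship it with the ciphertext.
fun encryptWithIv(key: ByteArray, plaintext: ByteArray): ByteArray {
    val iv = ByteArray(16).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
    cipher.init(Cipher.ENCRYPT_MODE, SecretKeySpec(key, "AES"), IvParameterSpec(iv))
    return iv + cipher.doFinal(plaintext) // output layout: IV (16 bytes) || ciphertext
}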

Resources

Standards

kotlin:S4347

When using SecureRandom, it is important not to use predictable seeds. This class is used to generate cryptographically strong random numbers. Using a predictable seed will make its output predictable as well, which counteracts the use case of SecureRandom.

Why is this an issue?

java.security.SecureRandom is often used to generate random values for cryptographic algorithms. When a random number generator is used for cryptographic purposes, the generated numbers must be as random and unpredictable as possible. When SecureRandom is improperly seeded with a constant or a predictable value, its output will also be predictable.

This can have severe security implications for cryptographic operations that rely on the randomness of the generated numbers. By using a predictable seed, an attacker can potentially guess or deduce the generated numbers, compromising the security of whatever cryptographic algorithm relies on SecureRandom.

What is the potential impact?

It is crucial to understand that the strength of cryptographic algorithms heavily relies on the quality of the random numbers used. By improperly seeding the SecureRandom class, we introduce a significant weakness that can be exploited by attackers.

Insecure cryptographic keys

One of the primary use cases for the SecureRandom class is generating cryptographic keys. If an attacker can predict the seed used to initialize the SecureRandom instance, they may be able to derive the same keys. Depending on the use case, this can lead to multiple severe outcomes, such as:

  • Being able to decrypt sensitive documents, leading to privacy breaches or identity theft.
  • Gaining access to a private key used for signing, allowing an attacker to forge digital signatures and impersonate legitimate entities.
  • Bypassing authentication mechanisms that rely on public-key infrastructure (PKI), which can be abused to gain unauthorized access to systems or networks.

Session hijacking and man-in-the-middle attack

Another scenario where this vulnerability can be exploited is in the generation of session tokens or nonces for secure communication protocols. If an attacker can predict the seed used to generate these tokens, they can impersonate legitimate users or intercept sensitive information.

How to fix it in Java SE

Code examples

The following code uses a cryptographically strong random number generator to generate data that is not cryptographically strong.

Noncompliant code example

import java.security.SecureRandom

val sr = SecureRandom()
sr.setSeed(123456L) // Noncompliant
val v = sr.nextInt()

import java.security.SecureRandom

val sr = SecureRandom("abcdefghijklmnop".toByteArray(charset("us-ascii"))) // Noncompliant
val v = sr.nextInt()

Compliant solution

import java.security.SecureRandom

val sr = SecureRandom()
val v = sr.nextInt()

The following solution, available for JDK 1.8 and higher, additionally guarantees a strong algorithm:

import java.security.SecureRandom

val sr = SecureRandom.getInstanceStrong()
val v = sr.nextInt()

How does this work?

When the randomly generated data needs to be cryptographically strong, SecureRandom is the correct class to use. However, its documentation also states that "any seed material passed to a SecureRandom object must be unpredictable". When no seed is passed by the user to the object, the SecureRandom object chooses an unpredictable seed by default. Therefore, the easiest way to fix the issue is to use the default constructor without any calls to SecureRandom.setSeed().

To go the extra mile, SecureRandom.getInstanceStrong() returns an instance of SecureRandom that is guaranteed to use a strong algorithm for its number generation.

If the randomly generated data is not used for cryptographic purposes and is not business critical, it may be a better choice to use java.util.Random instead. In this case, setting a predictable seed may be acceptable depending on the situation.

Resources

Documentation

Standards

kotlin:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on applications distributed to end users.

Sensitive Code Example

WebView.setWebContentsDebuggingEnabled(true) for Android enables debugging support:

import android.webkit.WebView

WebView.setWebContentsDebuggingEnabled(true) // Sensitive

Compliant Solution

WebView.setWebContentsDebuggingEnabled(false) for Android disables debugging support:

import android.webkit.WebView

WebView.setWebContentsDebuggingEnabled(false)

See

kotlin:S5322

Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities:

Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application, if it is not already running, once a matching broadcast is received. The receiver is an entry point into the application.

Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver.

Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver.

This rule raises an issue when a receiver is registered without specifying any broadcast permission.

Ask Yourself Whether

  • The data extracted from intents is not sanitized.
  • Intents broadcast is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See the Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build
import android.os.Handler
import androidx.annotation.RequiresApi

class MyIntentReceiver {
    @RequiresApi(api = Build.VERSION_CODES.O)
    fun register(
        context: Context, receiver: BroadcastReceiver?,
        filter: IntentFilter?,
        scheduler: Handler?,
        flags: Int
    ) {
        context.registerReceiver(receiver, filter) // Sensitive
        context.registerReceiver(receiver, filter, flags) // Sensitive

        // Broadcasting intent with "null" for broadcastPermission
        context.registerReceiver(receiver, filter, null, scheduler) // Sensitive
        context.registerReceiver(receiver, filter, null, scheduler, flags) // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver
import android.content.Context
import android.content.IntentFilter
import android.os.Build
import android.os.Handler
import androidx.annotation.RequiresApi

class MyIntentReceiver {
    @RequiresApi(api = Build.VERSION_CODES.O)
    fun register(
        context: Context, receiver: BroadcastReceiver?,
        filter: IntentFilter?,
        broadcastPermission: String?,
        scheduler: Handler?,
        flags: Int
    ) {
        context.registerReceiver(receiver, filter, broadcastPermission, scheduler)
        context.registerReceiver(receiver, filter, broadcastPermission, scheduler, flags)
    }
}

See

kotlin:S6362

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. In the context of a WebView, JavaScript code can exfiltrate local files that might be sensitive or, even worse, access exposed functions of the application, which can result in more severe vulnerabilities such as code injection. Thus, JavaScript support should not be enabled for WebViews unless it is absolutely necessary and the authenticity of the web resources can be guaranteed.

Ask Yourself Whether

  • The WebView only renders static web content that does not require JavaScript code to be executed.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable JavaScript support for WebViews unless it is necessary to execute JavaScript code. Only trusted pages should be rendered.

Sensitive Code Example

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setJavaScriptEnabled(true) // Sensitive

Compliant Solution

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setJavaScriptEnabled(false)

See

kotlin:S6363

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered.

If malicious JavaScript code in a WebView is executed, this can leak the contents of sensitive files when access to local files is enabled.

Ask Yourself Whether

  • No local files have to be accessed by the WebView.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable access to local files for WebViews unless it is necessary. In the case of a successful attack through a Cross-Site Scripting vulnerability, the impact decreases drastically if no local files can be read out.

Sensitive Code Example

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setAllowContentAccess(true) // Sensitive
webView.getSettings().setAllowFileAccess(true) // Sensitive

Compliant Solution

import android.webkit.WebView

val webView: WebView = findViewById(R.id.webview)
webView.getSettings().setAllowContentAccess(false)
webView.getSettings().setAllowFileAccess(false)

See

kotlin:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability that two users share both the same password and the same salt may be low, depending on the salt size; the shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

How to fix it in Java SE

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

import javax.crypto.spec.PBEParameterSpec

fun hash() {
    val salt = "salty".toByteArray()
    val cipherSpec = PBEParameterSpec(salt, 10000) // Noncompliant
}

Compliant solution

import java.security.SecureRandom
import javax.crypto.spec.PBEParameterSpec

fun hash() {
    val random = SecureRandom()
    val salt = ByteArray(16)
    random.nextBytes(salt)
    val cipherSpec = PBEParameterSpec(salt, 10000)
}

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and of sufficient length by calling the nextBytes method of the SecureRandom class with a 16-byte salt buffer. This class implements a cryptographically secure pseudo-random number generator.
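
To illustrate how these pieces fit together, here is a minimal sketch of password hashing with a per-user random salt, assuming PBKDF2 through the standard SecretKeyFactory API; the function name and iteration count are illustrative only:

import java.security.SecureRandom
import java.util.Base64
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

fun hashPassword(password: CharArray): Pair<String, String> {
    val salt = ByteArray(16)                       // 128-bit salt, as recommended
    SecureRandom().nextBytes(salt)                 // cryptographically secure randomness
    val spec = PBEKeySpec(password, salt, 10000, 256)
    val hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec).encoded
    // Store both values; the salt is not secret, but it must be unique per user.
    return Base64.getEncoder().encodeToString(salt) to
        Base64.getEncoder().encodeToString(hash)
}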

Resources

Standards

  • OWASP Top 10:2021 A02:2021 - Cryptographic Failures
  • OWASP - Top 10 2017 - A03:2017 - Sensitive Data Exposure
  • CWE - CWE-759: Use of a One-Way Hash without a Salt
  • CWE - CWE-760: Use of a One-Way Hash with a Predictable Salt

kotlin:S5320

In Android applications, broadcasting intents is security-sensitive. For example, it has led in the past to the following vulnerability:

By default, broadcasted intents are visible to every application, exposing all sensitive information they contain.

This rule raises an issue when an intent is broadcasted without specifying any "receiver permission".

Ask Yourself Whether

  • The intent contains sensitive information.
  • Intent reception is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See Android documentation for more information.
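
For illustration, here is a minimal sketch of a guarded broadcast; the action and permission names are hypothetical, and the permission must be declared in the manifests of both the sending and the receiving application:

import android.content.Context
import android.content.Intent

fun notifySecurely(context: Context) {
    // Hypothetical action string carried by the intent
    val intent = Intent("com.example.ACTION_SYNC_DONE")
    // Only receivers holding this permission can see the broadcast
    context.sendBroadcast(intent, "com.example.permission.SYNC_STATUS")
}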

Sensitive Code Example

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.os.Handler
import android.os.UserHandle

class MyIntentBroadcast {
    fun broadcast(intent: Intent,
                  context: Context,
                  user: UserHandle,
                  resultReceiver: BroadcastReceiver,
                  scheduler: Handler,
                  initialCode: Int,
                  initialData: String,
                  initialExtras: Bundle,
                  broadcastPermission: String) {
        context.sendBroadcast(intent) // Sensitive
        context.sendBroadcastAsUser(intent, user) // Sensitive

        // Broadcasting intent with "null" for receiverPermission
        context.sendBroadcast(intent, null) // Sensitive
        context.sendBroadcastAsUser(intent, user, null) // Sensitive
        context.sendOrderedBroadcast(intent, null) // Sensitive
        context.sendOrderedBroadcastAsUser(intent, user, null, resultReceiver,
            scheduler, initialCode, initialData, initialExtras) // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.os.Handler
import android.os.UserHandle

class MyIntentBroadcast {
    fun broadcast(intent: Intent,
                  context: Context,
                  user: UserHandle,
                  resultReceiver: BroadcastReceiver,
                  scheduler: Handler,
                  initialCode: Int,
                  initialData: String,
                  initialExtras: Bundle,
                  broadcastPermission: String) {

        context.sendBroadcast(intent, broadcastPermission)
        context.sendBroadcastAsUser(intent, user, broadcastPermission)
        context.sendOrderedBroadcast(intent, broadcastPermission)
        context.sendOrderedBroadcastAsUser(intent, user, broadcastPermission, resultReceiver,
            scheduler, initialCode, initialData, initialExtras)
    }
}

See

kotlin:S5324

Storing data locally is a common task for mobile applications. Such data includes files, among other things. One convenient way to store files is to use the external file storage, which usually offers a larger amount of disk space compared to internal storage.

Files created on the external storage are globally readable and writable. Therefore, a malicious application having the permissions WRITE_EXTERNAL_STORAGE or READ_EXTERNAL_STORAGE could try to read sensitive information from the files that other applications have stored on the external storage.

External storage can also be removed by the user (e.g. when it is an SD card), making the files unavailable to the application.

Ask Yourself Whether

Your application uses external storage to:

  • store files that contain sensitive data.
  • store files that are not meant to be shared with other applications.
  • store files that are critical for the application to work.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use internal storage whenever possible as the system prevents other apps from accessing this location.
  • Only use external storage if you need to share non-sensitive files with other applications.
  • If your application has to use the external storage to store sensitive data, make sure it encrypts the files using EncryptedFile (see the sketch after this list).
  • Data coming from external storage should always be considered untrusted and should be validated.
  • As some external storage can be removed, make sure to never store files on it that are critical for the usability of your application.
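
As an illustration of the EncryptedFile recommendation above, here is a minimal sketch assuming the AndroidX security-crypto library is on the classpath; the file name is hypothetical:

import android.content.Context
import androidx.security.crypto.EncryptedFile
import androidx.security.crypto.MasterKeys
import java.io.File

fun writeEncrypted(context: Context, data: ByteArray) {
    val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
    val encryptedFile = EncryptedFile.Builder(
        File(context.filesDir, "secret.bin"),  // hypothetical file in internal storage
        context,
        masterKeyAlias,
        EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
    ).build()
    // The stream transparently encrypts everything written to it
    encryptedFile.openFileOutput().use { it.write(data) }
}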

Sensitive Code Example

import android.content.Context

class AccessExternalFiles {

    fun accessFiles(context: Context) {
        context.getExternalFilesDir(null) // Sensitive
    }
}

Compliant Solution

import android.content.Context
import android.os.Environment

class AccessExternalFiles {

    fun accessFiles(context: Context) {
        context.getFilesDir()
    }
}

See

kotlin:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest modes are CBC (Cipher Block Chaining) and ECB (Electronic Codebook), as they are either vulnerable to padding oracles or do not provide authentication mechanisms.

And for RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val aes = Cipher.getInstance("AES/CBC/PKCS5Padding") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

Example with an asymmetric cipher, RSA:

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val rsa = Cipher.getInstance("RSA/None/NoPadding") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val aes = Cipher.getInstance("AES/GCM/NoPadding")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

import javax.crypto.Cipher
import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val rsa = Cipher.getInstance("RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: Use Galois/Counter mode (GCM)

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
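
As a minimal sketch of a complete AES-GCM encryption, assuming the default JCE provider (the helper name is illustrative):

import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun encryptGcm(plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val iv = ByteArray(12)                     // 96-bit IV, the recommended size for GCM
    SecureRandom().nextBytes(iv)               // the IV must be unique for every encryption
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv)) // 128-bit auth tag
    return iv to cipher.doFinal(plaintext)     // transmit the IV alongside the ciphertext
}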

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

kotlin:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection.
  • Security during transmission or on storage devices.
  • Data integrity, general trust, and authentication.

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptographic Extension

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher

fun main(args: Array<String>) {
    try {
        val des = Cipher.getInstance("DES") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

Compliant solution

import javax.crypto.NoSuchPaddingException
import java.security.NoSuchAlgorithmException
import javax.crypto.Cipher

fun main(args: Array<String>) {
    try {
        val aes = Cipher.getInstance("AES/GCM/NoPadding")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: NoSuchPaddingException) {
        // ...
    }
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

kotlin:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages the bad practice of using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, solving the issue takes more time, which increases the impact of an attack.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

val ip = "192.168.12.42"
val socket = ServerSocket(6667, 50, InetAddress.getByName(ip))

Compliant Solution

val ip = System.getenv("myapplication.ip")
val socket = ServerSocket(6667, 50, InetAddress.getByName(ip))

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849
  • Addresses from ::ffff:0:127.0.0.1 to ::ffff:0:127.255.255.255 and from ::ffff:127.0.0.1 to ::ffff:127.255.255.255, which are local IPv4-mapped IPv6 addresses

See

kotlin:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

Noncompliant code example

import javax.net.ssl.SSLContext
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        SSLContext.getInstance("TLSv1.1") // Noncompliant
    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

Compliant solution

import javax.net.ssl.SSLContext
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        SSLContext.getInstance("TLSv1.2")
    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may enable older cipher suites that have been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
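
As a complementary sketch, the protocol versions negotiated by an individual socket can also be narrowed explicitly, assuming the default platform SSLSocketFactory:

import javax.net.ssl.SSLSocket
import javax.net.ssl.SSLSocketFactory

fun openModernSocket(host: String): SSLSocket {
    val socket = SSLSocketFactory.getDefault().createSocket(host, 443) as SSLSocket
    socket.enabledProtocols = arrayOf("TLSv1.3", "TLSv1.2") // never negotiate TLSv1.0/1.1
    return socket
}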

Resources

Articles & blog posts

Standards

kotlin:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable, as is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Only use random number generators which are recommended by OWASP or any other trusted organization.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

import java.util.Random

val random = Random() // Sensitive: Random() is not a secure random number generator
val bytes = ByteArray(20)
random.nextBytes(bytes)

Compliant Solution

import java.security.SecureRandom

val random = SecureRandom() // Compliant
val bytes = ByteArray(20)
random.nextBytes(bytes)

See

kotlin:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptographic Extension

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

import java.security.KeyPairGenerator
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val keyPairGenerator = KeyPairGenerator.getInstance("RSA")
        keyPairGenerator.initialize(1024) // Noncompliant

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

Here is an example of a private key generation with AES:

import javax.crypto.KeyGenerator
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val keyGenerator = KeyGenerator.getInstance("AES")
        keyGenerator.init(64) // Noncompliant

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the algorithm name:

import java.security.KeyPairGenerator
import java.security.NoSuchAlgorithmException
import java.security.InvalidAlgorithmParameterException
import java.security.spec.ECGenParameterSpec

fun main(args: Array<String>) {
    try {
        val keyPairGenerator = KeyPairGenerator.getInstance("EC")
        val ellipticCurveName = ECGenParameterSpec("secp112r1") // Noncompliant
        keyPairGenerator.initialize(ellipticCurveName)

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

Compliant solution

import java.security.KeyPairGenerator
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val keyPairGenerator = KeyPairGenerator.getInstance("RSA")
        keyPairGenerator.initialize(2048)

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

import javax.crypto.KeyGenerator
import java.security.NoSuchAlgorithmException

fun main(args: Array<String>) {
    try {
        val keyGenerator = KeyGenerator.getInstance("AES")
        keyGenerator.init(128)

    } catch (e: NoSuchAlgorithmException) {
        // ...
    }
}

import java.security.KeyPairGenerator
import java.security.NoSuchAlgorithmException
import java.security.InvalidAlgorithmParameterException
import java.security.spec.ECGenParameterSpec

fun main(args: Array<String>) {
    try {
        val keyPairGenerator = KeyPairGenerator.getInstance("EC")
        val ellipticCurveName = ECGenParameterSpec("secp256r1")
        keyPairGenerator.initialize(ellipticCurveName)

    } catch (e: NoSuchAlgorithmException) {
        // ...
    } catch (e: InvalidAlgorithmParameterException) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of a key generated with an elliptic curve algorithm is mentioned directly in its name. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC algorithms.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources

Articles & blog posts

Standards

kotlin:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. The role of certificate validation in this process is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When certificate validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Java Cryptographic Extension

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by overriding X509TrustManager with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

val trustAllCerts = arrayOf<TrustManager>(object : X509TrustManager {
  @Throws(CertificateException::class)
  override fun checkClientTrusted(chain: Array<java.security.cert.X509Certificate>, authType: String) {
  } // Noncompliant

  @Throws(CertificateException::class)
  override fun checkServerTrusted(chain: Array<java.security.cert.X509Certificate>, authType: String) {
  } // Noncompliant

  override fun getAcceptedIssuers(): Array<java.security.cert.X509Certificate> {
    return arrayOf()
  }
})
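
A minimal compliant sketch, assuming the platform defaults are acceptable, is simply to initialize an SSLContext without any custom trust manager:

import javax.net.ssl.SSLContext

fun defaultContext(): SSLContext {
    val context = SSLContext.getInstance("TLS")
    context.init(null, null, null) // null arguments keep the default KeyManager, TrustManager and SecureRandom
    return context
}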

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources

Standards

kotlin:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in OkHttp

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by overriding javax.net.ssl.HostnameVerifier.verify() with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

import javax.net.ssl.HostnameVerifier
import javax.net.ssl.SSLSession
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response

fun request() {
    val builder = OkHttpClient.Builder()
    builder.hostnameVerifier(object : HostnameVerifier {
        override fun verify(hostname: String?, session: SSLSession?): Boolean { // Noncompliant
            return true
        }
    })

    val client = builder.build()

    val request = Request.Builder()
        .url("https://example.com")
        .build()

    val response: Response = client.newCall(request).execute()
}

Compliant solution

import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response

fun request() {
    val builder = OkHttpClient.Builder()

    val client = builder.build()

    val request = Request.Builder()
        .url("https://example.com")
        .build()

    val response: Response = client.newCall(request).execute()
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources

Standards

kotlin:S6288

Android KeyStore is a secure container for storing key material; in particular, it prevents key material from being extracted, i.e. even when the application process is compromised, the attacker cannot extract keys but may still be able to use them. It’s possible to enable an Android security feature, user authentication, to restrict the usage of keys to authenticated users only. The lock screen has to be unlocked with defined credentials (pattern/PIN/password, biometric).

Ask Yourself Whether

  • The application requires prohibiting the use of keys in case of compromise of the application process.
  • The key material is used in the context of a highly sensitive application, like an e-banking mobile app.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable user authentication by setting setUserAuthenticationRequired to true during key generation, and to allow keys to be used only for a limited duration (by setting appropriate values via setUserAuthenticationValidityDurationSeconds), after which the user must re-authenticate.

Sensitive Code Example

Any users can use the key:

val keyGenerator: KeyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")

var builder: KeyGenParameterSpec = KeyGenParameterSpec.Builder("test_secret_key", KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT) // Noncompliant
   .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
   .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
   .build()

keyGenerator.init(builder)

Compliant Solution

The use of the key is limited to authenticated users (for a duration set to 60 seconds):

val keyGenerator: KeyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")

var builder: KeyGenParameterSpec = KeyGenParameterSpec.Builder("test_secret_key", KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
   .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
   .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
   .setUserAuthenticationRequired(true) // Compliant
   .setUserAuthenticationParameters(60, KeyProperties.AUTH_DEVICE_CREDENTIAL)
   .build()

keyGenerator.init(builder)

See

kotlin:S4790

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using that second input gives an attacker the same access as the originally hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMAC-RIPEMD160.

The following APIs are tracked for use of obsolete crypto algorithms:

  • java.security.AlgorithmParameters (JDK)
  • java.security.AlgorithmParameterGenerator (JDK)
  • java.security.MessageDigest (JDK)
  • java.security.KeyFactory (JDK)
  • java.security.KeyPairGenerator (JDK)
  • java.security.Signature (JDK)
  • javax.crypto.Mac (JDK)
  • javax.crypto.KeyGenerator (JDK)
  • org.apache.commons.codec.digest.DigestUtils (Apache Commons Codec)
  • org.springframework.util.DigestUtils
  • com.google.common.hash.Hashing (Guava)
  • org.springframework.security.authentication.encoding.ShaPasswordEncoder (Spring Security 4.2.x)
  • org.springframework.security.authentication.encoding.Md5PasswordEncoder (Spring Security 4.2.x)
  • org.springframework.security.crypto.password.LdapShaPasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.Md4PasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.MessageDigestPasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.NoOpPasswordEncoder (Spring Security 5.0.x)
  • org.springframework.security.crypto.password.StandardPasswordEncoder (Spring Security 5.0.x)

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2, or pbkdf2, because this slows down brute-force attacks.

Sensitive Code Example

val md1: MessageDigest = MessageDigest.getInstance("SHA") // Sensitive: SHA is not a standard name; for most security providers it's an alias of SHA-1
val md2: MessageDigest = MessageDigest.getInstance("SHA1") // Sensitive

Compliant Solution

val md1: MessageDigest = MessageDigest.getInstance("SHA-512") // Compliant

See

kotlin:S6291

Storing data locally is a common task for mobile applications. Such data includes preferences or authentication tokens for external services, among other things. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase, SharedPreferences, and Realm. By default these systems store the data unencrypted, so an attacker with physical access to the device can easily read it. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local databases that contain sensitive information. Most systems provide secure alternatives to plain-text storage that should be used. If no secure alternative is available the data can also be encrypted manually before it is stored.

The encryption password should not be hard-coded in the application. There are different ways the password can be provided to encrypt and decrypt the database. In the case of EncryptedSharedPreferences, the Android Keystore can be used to store the password. Other databases can rely on EncryptedSharedPreferences to store passwords. The password can also be provided dynamically by the user of the application, or it can be fetched from a remote server if the other methods are not feasible.
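
For illustration, here is a minimal sketch of that approach, assuming the AndroidX security-crypto library; the preference file name and key name are hypothetical:

import android.content.Context
import android.util.Base64
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKeys
import java.security.SecureRandom

fun getOrCreateDbKey(context: Context): ByteArray {
    val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
    val prefs = EncryptedSharedPreferences.create(
        "secret", masterKeyAlias, context,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
    // Reuse the stored key if present; otherwise generate a fresh random one
    prefs.getString("db_key", null)?.let { return Base64.decode(it, Base64.NO_WRAP) }
    val key = ByteArray(64).also { SecureRandom().nextBytes(it) } // Realm expects a 64-byte key
    prefs.edit().putString("db_key", Base64.encodeToString(key, Base64.NO_WRAP)).apply()
    return key
}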

Sensitive Code Example

For SQLiteDatabase:

var db = activity.openOrCreateDatabase("test.db", Context.MODE_PRIVATE, null) // Sensitive

For SharedPreferences:

val pref = activity.getPreferences(Context.MODE_PRIVATE) // Sensitive

For Realm:

val config = RealmConfiguration.Builder().build()
val realm = Realm.getInstance(config) // Sensitive

Compliant Solution

Instead of SQLiteDatabase you can use SQLCipher:

val db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null)

Instead of SharedPreferences you can use EncryptedSharedPreferences:

val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
EncryptedSharedPreferences.create(
    "secret",
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
)

For Realm an encryption key can be specified in the config:

val config = RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build()
val realm = Realm.getInstance(config)

See

kotlin:S6293

Android comes with Android KeyStore, a secure container for storing key materials. It’s possible to define certain keys to be unlocked when users authenticate using biometric credentials. This way, even if the application process is compromised, the attacker cannot access keys, as presence of the authorized user is required.

These keys can be used to encrypt, sign, or create a message authentication code (MAC) as proof that the authentication result has not been tampered with. This protection defeats the scenario where an attacker with physical access to the device hooks into the application process and calls the onAuthenticationSucceeded method directly. In that case, the attacker would be unable to extract the sensitive data or to perform the critical operations protected by the biometric authentication.

Ask Yourself Whether

The application contains:

  • Cryptographic keys / sensitive information that need to be protected using biometric authentication.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to tie the biometric authentication to a cryptographic operation by using a CryptoObject during authentication.

Sensitive Code Example

A CryptoObject is not used during authentication:

// ...
val biometricPrompt: BiometricPrompt = BiometricPrompt(activity, executor, callback)
// ...
biometricPrompt.authenticate(promptInfo) // Noncompliant

Compliant Solution

A CryptoObject is used during authentication:

// ...
val biometricPrompt: BiometricPrompt = BiometricPrompt(activity, executor, callback)
// ...
biometricPrompt.authenticate(promptInfo, BiometricPrompt.CryptoObject(cipher)) // Compliant

See

go:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages the bad practice of using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when the IP address is hardcoded, solving the issue takes more time, which increases the impact of an attack.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

var (
  ip   = "192.168.12.42"
  port = 3333
)

SocketClient(ip, port)

Compliant Solution

config, err := ReadConfig("properties.ini")

ip := config["ip"]
port := config["port"]

SocketClient(ip, port)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

go:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

func connect()  {
  user := "root"
  password := "supersecret" // Sensitive

  url := "login=" + user + "&passwd=" + password
}

Compliant Solution

func connect()  {
  user := getEncryptedUser()
  password := getEncryptedPass() // Compliant

  url := "login=" + user + "&passwd=" + password
}

See

python:S2115

When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in MySQL Connector/Python

Code examples

The following code uses an empty password to connect to a MySQL database.

The vulnerability can be fixed by using a strong password retrieved from an environment variable, DB_PASSWORD. This environment variable is set during deployment. The password should be strong and different for each database.

Noncompliant code example

from mysql.connector import connection

connection.MySQLConnection(host='localhost', user='sonarsource', password='')  # Noncompliant

Compliant solution

from mysql.connector import connection
import os

db_password = os.getenv('DB_PASSWORD')
connection.MySQLConnection(host='localhost', user='sonarsource', password=db_password)

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:

  1. Security Vulnerability: Hard-coded passwords can be easily discovered by anyone who has access to the code, such as other developers or attackers. This can lead to unauthorized access to the database and potential data breaches.
  2. Lack of Flexibility: Hard-coded passwords make it difficult to change the password without modifying the code. If the password needs to be updated, it would require recompiling and redeploying the code, which can be time-consuming and error-prone.
  3. Version Control Issues: Storing passwords in code can lead to version control issues. If the code is shared or stored in a version control system, the password will be visible to anyone with access to the repository, which is a security risk.

To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase.
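
As an illustration of the environment-variable approach, here is a minimal sketch that fails fast when DB_PASSWORD is missing, instead of silently falling back to an empty password (host and user names are reused from the example above):

import os

from mysql.connector import connection

db_password = os.getenv('DB_PASSWORD')
if not db_password:
    # Refuse to start rather than connect with an empty password.
    raise RuntimeError("DB_PASSWORD environment variable is not set")

connection.MySQLConnection(host='localhost', user='sonarsource', password=db_password)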

Resources

Standards

python:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information. Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in pyca

Code examples

Noncompliant code example

from cryptography.hazmat.primitives.ciphers import (
    Cipher,
    algorithms,
    modes,
)

iv     = "doNotTryThis@Home2023"
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

cipher.encryptor()  # Noncompliant

Compliant solution

In this example, the code explicitly uses a random number generator that is considered cryptographically strong.

from os import urandom

from cryptography.hazmat.primitives.ciphers import (
    Cipher,
    algorithms,
    modes,
)

iv     = urandom(16)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

cipher.encryptor()

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine it may be transmitted along with the ciphertext.

In the previous non-compliant example, the problem is not that the IV is hard-coded; it is that the same IV is used for multiple encryption attempts.
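
Putting these two points together, here is a minimal sketch (assuming key is a valid AES key defined elsewhere) that generates a fresh IV for each message and transmits it with the ciphertext:

from os import urandom

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_message(key, plaintext):
    iv = urandom(16)                      # fresh, unpredictable IV per message
    padder = padding.PKCS7(128).padder()  # CBC operates on full 16-byte blocks
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    return iv + ciphertext                # the IV is not secret; ship it with the data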

Resources

Standards

python:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, any action that can change the state of the application.

The attacker can trick the victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Sensitive operations should never be performed with safe HTTP methods such as GET, which are designed to be used only for information retrieval.

Sensitive Code Example

For a Django application, the code is sensitive when:

  • django.middleware.csrf.CsrfViewMiddleware is not used in the Django settings:
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
] # Sensitive: django.middleware.csrf.CsrfViewMiddleware is missing
  • the CSRF protection is disabled on a view:
@csrf_exempt # Sensitive
def example(request):
    return HttpResponse("default")

For a Flask application, the code is sensitive when:

  • the WTF_CSRF_ENABLED setting is set to false:
app = Flask(__name__)
app.config['WTF_CSRF_ENABLED'] = False # Sensitive
  • the application doesn’t use the CSRFProtect module:
app = Flask(__name__) # Sensitive: CSRFProtect is missing

@app.route('/')
def hello_world():
    return 'Hello, World!'
  • the CSRF protection is disabled on a view:
app = Flask(__name__)
csrf = CSRFProtect()
csrf.init_app(app)

@app.route('/example/', methods=['POST'])
@csrf.exempt # Sensitive
def example():
    return 'example '
  • the CSRF protection is disabled on a form:
class unprotectedForm(FlaskForm):
    class Meta:
        csrf = False # Sensitive

    name = TextField('name')
    submit = SubmitField('submit')

Compliant Solution

For a Django application,

  • it is recommended to protect all the views with django.middleware.csrf.CsrfViewMiddleware:
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware', # Compliant
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
  • and to not disable the CSRF protection on specific views:
def example(request): # Compliant
    return HttpResponse("default")

For a Flask application,

  • the CSRFProtect module should be used (and not disabled further with WTF_CSRF_ENABLED set to false):
app = Flask(__name__)
csrf = CSRFProtect()
csrf.init_app(app) # Compliant
  • and it is recommended to not disable the CSRF protection on specific views or forms:
@app.route('/example/', methods=['POST']) # Compliant
def example():
    return 'example '

class unprotectedForm(FlaskForm):
    class Meta:
        csrf = True # Compliant

    name = TextField('name')
    submit = SubmitField('submit')

See

python:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input. In some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, meaning that a small carefully-crafted input (around 20 characters) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact with, in this case, a larger carefully-crafted input (thousands of characters).

This rule determines the runtime complexity of a regular expression and informs you of the complexity if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen. Note that when performing a full match (e.g. using re.fullmatch), the end of the regex counts as a pattern that can fail because it will only succeed when the end of the string is reached.

  • If you have a non-possessive repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple non-possessive repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If you’re performing a partial match (such as by using re.search, re.split, re.findall etc.) and the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition (even a possessive one), if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example re.split(r"\s*,", my_str) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ does not cause performance issues: the inner group can be matched only if there is exactly one b character per repetition of the group.
  • Optimize regular expressions with possessive quantifiers and atomic grouping (available since Python 3.11).
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match. Especially when using partial matches, for which it is quite hard to avoid quadratic runtimes. In those cases consider the following approaches:

  • Solve the problem without regular expressions.
  • Use an alternative non-backtracking regex implementation, such as Google’s RE2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it, or using multiple regular expressions. One example would be to replace re.split("\s*,\s*", my_str) with re.split(",", my_str) and then trimming the spaces from the resulting strings as a second step, as sketched below.
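
For instance, the multiple-pass rewrite from the last bullet can look like this minimal sketch:

import re

def split_csv(my_str):
    # Linear-time alternative to re.split(r"\s*,\s*", my_str):
    # split on the comma alone, then trim the whitespace in a second pass.
    return [part.strip() for part in re.split(",", my_str)]

print(split_csv("a ,  b,c"))  # ['a', 'b', 'c']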

See

python:S6245

This rule is deprecated, and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disposal of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations and standards such as HIPAA or PCI DSS.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

from aws_cdk import aws_s3 as s3

bucket = s3.Bucket(self, "bucket",
    encryption=s3.BucketEncryption.UNENCRYPTED       # Sensitive
)

The default value of encryption is KMS if encryption_key is set. Otherwise, if both parameters are absent, the bucket is unencrypted.

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

from aws_cdk import aws_s3 as s3

bucket = s3.Bucket(self, "bucket",
    encryption=s3.BucketEncryption.S3_MANAGED
)

# Alternatively with a KMS key managed by the user
# ('my_encryption_key' is assumed to be a kms.IKey defined elsewhere;
# encryption then defaults to KMS).

bucket = s3.Bucket(self, "bucket",
    encryption_key=my_encryption_key
)

See

python:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to everyone in the world (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, CSS, etc.).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant users only the permissions necessary for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default), and if more granularity is needed, use an appropriate S3 bucket policy.

Sensitive Code Example

All users (i.e., anyone in the world, authenticated or not) have read and write permissions with the PUBLIC_READ_WRITE access control:

from aws_cdk import aws_s3 as s3, aws_s3_deployment as s3deploy

bucket = s3.Bucket(self, "bucket",
    access_control=s3.BucketAccessControl.PUBLIC_READ_WRITE     # Sensitive
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    access_control=s3.BucketAccessControl.PUBLIC_READ_WRITE     # Sensitive
)

Compliant Solution

With the PRIVATE access control (the default), only the bucket owner has read/write permissions on the bucket and its ACL.

from aws_cdk import aws_s3 as s3, aws_s3_deployment as s3deploy

bucket = s3.Bucket(self, "bucket",
    access_control=s3.BucketAccessControl.PRIVATE       # Compliant
)

# Another example
s3deploy.BucketDeployment(self, "DeployWebsite",
    access_control=s3.BucketAccessControl.PRIVATE       # Compliant
)

See

python:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e., to grant users only the permissions necessary for their required tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

from aws_cdk.aws_iam import PolicyStatement, AnyPrincipal, Effect
from aws_cdk.aws_s3 import Bucket

bucket = Bucket(self, "ExampleBucket")

bucket.add_to_resource_policy(PolicyStatement(
  effect=Effect.ALLOW,
  actions=["s3:*"],
  resources=[bucket.arn_for_objects("*")],
  principals=[AnyPrincipal()] # Sensitive
))

Compliant Solution

This policy allows only the authorized users:

from aws_cdk.aws_iam import PolicyStatement, AccountRootPrincipal, Effect
from aws_cdk.aws_s3 import Bucket

bucket = Bucket(self, "ExampleBucket")

bucket.add_to_resource_policy(PolicyStatement(
  effect=Effect.ALLOW,
  actions=["s3:*"],
  resources=[bucket.arn_for_objects("*")],
  principals=[AccountRootPrincipal()]
))

See

python:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. If adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration, so a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

from aws_cdk import Stack, Size
from aws_cdk.aws_ec2 import Volume
from constructs import Construct

class EBSVolumeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self,
            "unencrypted-explicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1),
            encrypted=False  # Sensitive
        )

from aws_cdk import Stack, Size
from aws_cdk.aws_ec2 import Volume
from constructs import Construct

class EBSVolumeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self,
            "unencrypted-implicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1)
        ) # Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

from aws_cdk import Stack, Size
from aws_cdk.aws_ec2 import Volume
from constructs import Construct

class EBSVolumeStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        Volume(self,
            "encrypted-explicit",
            availability_zone="eu-west-1a",
            size=Size.gibibytes(1),
            encrypted=True
        )

See

python:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive and has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Only use random number generators which are recommended by OWASP or any other trusted organization.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

import random

random.getrandbits(1) # Sensitive
random.randint(0,9) # Sensitive
random.random()  # Sensitive

# the following functions are often misused to generate a salt by selecting characters from a string, e.g. "abcdefghijk"
random.sample(['a', 'b'], 1)  # Sensitive
random.choice(['a', 'b'])  # Sensitive
random.choices(['a', 'b'])  # Sensitive
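
A compliant alternative is the standard library secrets module, which is designed for cryptographic use and draws from the most secure source of randomness the operating system provides; a short sketch:

import secrets
import string

token = secrets.token_urlsafe(32)  # unpredictable session token
salt = secrets.token_bytes(16)     # random salt for password hashing

# Select characters securely, unlike random.choice above.
pin = ''.join(secrets.choice(string.digits) for _ in range(6))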

See

python:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information. Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Python Standard Library

Code examples

Noncompliant code example

import ssl

ssl.SSLContext(ssl.PROTOCOL_SSLv3) # Noncompliant

Compliant solution

import ssl

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

How does this work?

As a rule of thumb, you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
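
If all peers in your environment already support it, the same minimum_version mechanism shown in the compliant solution can enforce TLS v1.3 instead; a short sketch:

import ssl

# A client-side context that refuses anything below TLS v1.3.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_3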

Resources

Articles & blog posts

Standards

python:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2), as sketched below.
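
For illustration, here is a minimal sketch of such a derivation with the pyca/cryptography package (the passphrase, salt handling, and iteration count are assumptions for the example):

import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

salt = os.urandom(16)   # stored alongside the ciphertext; not secret
kdf = PBKDF2HMAC(
    algorithm=hashes.SHA256(),
    length=32,           # derive a 256-bit AES key
    salt=salt,
    iterations=600_000,  # a commonly cited minimum for PBKDF2-HMAC-SHA256
)
aes_key = kdf.derive(b"a passphrase from configuration")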

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information. Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in pyca

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024, backend=backend) # Noncompliant
public_key  = private_key.public_key()

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = dsa.generate_private_key(key_size = 1024, backend = backend) # Noncompliant
public_key  = private_key.public_key()

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = ec.generate_private_key(curve=ec.SECT163R2(), backend=backend)  # Noncompliant
public_key  = private_key.public_key()

Compliant solution

from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072, backend=backend)
public_key  = private_key.public_key()

from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = dsa.generate_private_key(key_size = 3072, backend = backend)
public_key  = private_key.public_key()

from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.backends import default_backend

backend = default_backend()

private_key = ec.generate_private_key(curve=ec.SECP521R1(), backend=backend)
public_key  = private_key.public_key()

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both; it provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys. A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
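
Since an AES key is just a sequence of random bytes, generating one at the recommended sizes is straightforward; a sketch:

import os

aes_128_key = os.urandom(16)  # 128-bit key, the recommended minimum
aes_256_key = os.urandom(32)  # 256-bit key, for a larger security margin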

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources

Articles & blog posts

Standards

python:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

Django application startup:

from django.conf import settings

settings.configure(DEBUG=True)  # Sensitive when set to True
settings.configure(DEBUG_PROPAGATE_EXCEPTIONS=True)  # Sensitive when set to True

def custom_config(config):
    settings.configure(default_settings=config, DEBUG=True)  # Sensitive

Inside settings.py or global_settings.py, which are the default configuration files for a Django application:

DEBUG = True  # Sensitive
DEBUG_PROPAGATE_EXCEPTIONS = True  # Sensitive

Flask application startup:

from flask import Flask

app = Flask(__name__)
app.debug = True  # Sensitive
app.run(debug=True)  # Sensitive

Compliant Solution

from django.conf import settings

settings.configure(DEBUG=False)
settings.configure(DEBUG_PROPAGATE_EXCEPTIONS=False)

def custom_config(config):
    settings.configure(default_settings=config, DEBUG=False)

# Inside settings.py or global_settings.py:
DEBUG = False
DEBUG_PROPAGATE_EXCEPTIONS = False

from flask import Flask

app = Flask(__name__)
app.debug = False
app.run(debug=False)

See

python:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive and has led to vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However, these are not the only ways to defeat or weaken encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key is reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that the strength of encryption algorithms decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that the algorithm is implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to crypto-analysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.

Sensitive Code Example

cryptography module

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305, AESGCM, AESCCM
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers import Cipher


def encrypt(key):
    Fernet(key)  # Sensitive
    ChaCha20Poly1305(key)  # Sensitive
    AESGCM(key)  # Sensitive
    AESCCM(key)  # Sensitive


private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # Sensitive


def encrypt2(algorithm, mode, backend):
    Cipher(algorithm, mode, backend)  # Sensitive

pynacl library

from nacl.public import Box
from nacl.secret import SecretBox


def public_encrypt(secret_key, public_key):
    Box(secret_key, public_key)  # Sensitive


def secret_encrypt(key):
    SecretBox(key)  # Sensitive

See

python:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number exceeds a predefined threshold. In particular, it is not recommended to recursively expand archives (an archive entry can itself be an archive).

Sensitive Code Example

For tarfile module:

import tarfile

tfile = tarfile.open("TarBomb.tar")
tfile.extractall('./tmp/')  # Sensitive
tfile.close()

For zipfile module:

import zipfile

zfile = zipfile.ZipFile('ZipBomb.zip', 'r')
zfile.extractall('./tmp/') # Sensitive
zfile.close()

Compliant Solution

For tarfile module:

import tarfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

totalSizeArchive = 0
totalEntryArchive = 0

tfile = tarfile.open("TarBomb.tar")
for entry in tfile:
  tarinfo = tfile.extractfile(entry)
  if tarinfo is None or entry.size == 0:
    continue  # skip directories and empty entries to avoid division by zero

  totalEntryArchive += 1
  sizeEntry = 0
  result = b''
  while True:
    sizeEntry += 1024
    totalSizeArchive += 1024

    ratio = sizeEntry / entry.size
    if ratio > THRESHOLD_RATIO:
      # ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
      break

    chunk = tarinfo.read(1024)
    if not chunk:
      break

    result += chunk

  if totalEntryArchive > THRESHOLD_ENTRIES:
    # too much entries in this archive, can lead to inodes exhaustion of the system
    break

  if totalSizeArchive > THRESHOLD_SIZE:
    # the uncompressed data size is too much for the application resource capacity
    break

tfile.close()

For zipfile module:

import zipfile

THRESHOLD_ENTRIES = 10000
THRESHOLD_SIZE = 1000000000
THRESHOLD_RATIO = 10

totalSizeArchive = 0
totalEntryArchive = 0

zfile = zipfile.ZipFile('ZipBomb.zip', 'r')
for zinfo in zfile.infolist():
    print('File', zinfo.filename)
    data = zfile.read(zinfo)

    totalEntryArchive += 1

    totalSizeArchive = totalSizeArchive + len(data)
    # directories have compress_size == 0; guard against division by zero
    ratio = len(data) / zinfo.compress_size if zinfo.compress_size else 0
    if ratio > THRESHOLD_RATIO:
      # ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
      break

    if totalSizeArchive > THRESHOLD_SIZE:
      # the uncompressed data size is too much for the application resource capacity
      break

    if totalEntryArchive > THRESHOLD_ENTRIES:
      # too much entries in this archive, can lead to inodes exhaustion of the system
      break

zfile.close()

See

python:S5300

This rule is deprecated, and will eventually be removed.

Sending emails is security-sensitive and can expose an application to a large range of vulnerabilities.

Information Exposure

Emails often contain sensitive information which might be exposed to an attacker if they can add an arbitrary address to the recipient list.

Spamming / Phishing

A malicious user can abuse email-based features to send spam or phishing content.

Dangerous Content Injection

Emails can contain HTML and JavaScript code, thus they can be used for XSS attacks.

Email Headers Injection

Email fields such as subject, to, cc, bcc, and from are set in email "headers". Using unvalidated user input to set those fields might allow attackers to inject newline characters into headers to craft malformed SMTP requests. Although modern libraries filter newline characters by default, user data used in email "headers" should always be validated.

In the past, this has led to vulnerabilities.

Ask Yourself Whether

  • Unvalidated user input is used to set email headers.
  • Email content contains data provided by users and is not sanitized.
  • The email recipient list or body is based on user input.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use an email library which sanitizes headers (Flask-Mail or django.core.mail).
  • Use HTML escape functions to sanitize every piece of data used in the email body, as sketched below.
  • Verify application logic to make sure that email-based features cannot be abused to:
    • send arbitrary emails for spamming or phishing
    • disclose sensitive email content
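
As a small sketch of the HTML-escaping advice above (the build_body helper and its input are hypothetical):

from html import escape

def build_body(user_comment):
    # Escape user-controlled data before embedding it in an HTML email body.
    return "<p>New comment:</p><p>{}</p>".format(escape(user_comment))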

Sensitive Code Example

smtplib

import smtplib

def send(from_email, to_email, msg):
  server = smtplib.SMTP('localhost', 1025)
  server.sendmail(from_email, to_email, msg) # Sensitive

Django

from django.core.mail import send_mail

def send(subject, msg, from_email, to_email):
  send_mail(subject, msg, from_email, [to_email]) # Sensitive

Flask-Mail

from flask import Flask
from flask_mail import Mail, Message

app = Flask(__name__)

def send(subject, body, from_email, to_email):
    mail = Mail(app)
    msg = Message(subject, recipients=[to_email], body=body, sender=from_email)
    mail.send(msg) # Sensitive

See

python:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in PyCrypto

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

from Crypto.Cipher import AES

AES.new(key, AES.MODE_ECB) # Noncompliant

Example with an asymmetric cipher, RSA:

from Crypto.Cipher import PKCS1_v1_5

PKCS1_v1_5.new(key) # Noncompliant

Compliant solution

Since PyCrypto is not supported anymore, another library should be used. In the current context, Cryptodome uses a similar API.

For the AES symmetric cipher, use the GCM mode:

from Crypto.Cipher import AES

AES.new(key, AES.MODE_GCM)

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

from Crypto.Cipher import PKCS1_OAEP

PKCS1_OAEP.new(key)

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-then-Authenticate-then-Translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
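
For illustration, here is a short sketch with the Cryptodome-compatible API mentioned above; AES-GCM produces an authentication tag that must be verified on decryption:

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_GCM)
ciphertext, tag = cipher.encrypt_and_digest(b"secret message")

# Decryption verifies the tag and raises ValueError if the data was tampered with.
receiver = AES.new(key, AES.MODE_GCM, nonce=cipher.nonce)
plaintext = receiver.decrypt_and_verify(ciphertext, tag)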

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

python:S5547

This vulnerability means that the cleartext of an encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Cryptodome

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

from Crypto.Cipher import DES # pycryptodome
from Cryptodome.Cipher import DES # pycryptodomex

cipher = DES.new(key, DES.MODE_OFB) # Noncompliant

Compliant solution

from Crypto.Cipher import AES # pycryptodome
from Cryptodome.Cipher import AES # pycryptodomex

cipher = AES.new(key, AES.MODE_CCM)

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

python:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in PyJWT

Code examples

The following code contains an example of JWT decoding without verification of the signature.

Noncompliant code example

import jwt

jwt.decode(token, options={"verify_signature":False}) # Noncompliant

Compliant solution

By default, verification is enabled for the method decode.

import jwt

jwt.decode(token, key, algorithms=["HS256"])

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, the token is not trustworthy, indicating potential tampering. When encountering an invalid token, deny the request carrying it and log the event for further investigation (see the sketch after this list).
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.
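
A minimal sketch with PyJWT that follows these instructions; the environment variable name is an assumption. jwt.decode verifies the signature by default, and PyJWT signals failures through subclasses of InvalidTokenError:

import os

import jwt

SECRET = os.environ["JWT_SECRET_KEY"]  # assumed variable name

def authenticate(token):
    try:
        # Verification is on by default; restrict the accepted algorithms explicitly
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        # Untrustworthy token: deny the request and log the event
        return None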

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.
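
One possible way to implement such a grace period with PyJWT is to try the current key first and fall back to the previous one. This is only a sketch, and the key-passing convention is an assumption:

import jwt

def decode_with_rotation(token, current_key, previous_key):
    # During the grace period, tokens signed with the previous key are still accepted
    for key in (current_key, previous_key):
        try:
            return jwt.decode(token, key, algorithms=["HS256"])
        except jwt.InvalidSignatureError:
            continue  # try the next accepted key
    raise jwt.InvalidTokenError("token not signed with an accepted key")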

Resources

Standards

python:S6252

S3 buckets can be versioned. When an S3 bucket is unversioned, a new version of an object overwrites the existing one in the S3 bucket.

This can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning so that different versions of an object can be retrieved and restored.

Sensitive Code Example

bucket = s3.Bucket(self, "bucket",
    versioned=False       # Sensitive
)

The default value of versioned is False, so the absence of this parameter is also sensitive.

Compliant Solution

bucket = s3.Bucket(self, "bucket",
    versioned=True
)

See

python:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like Argon2PasswordHasher, BCryptPasswordHasher, etc., should be used instead.

This rule tracks creation of BasePasswordHasher subclasses for Django applications.

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

class CustomPasswordHasher(BasePasswordHasher):  # Sensitive
    # ...
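
Instead of writing a custom hasher, configure Django's built-in ones. A minimal settings.py sketch (the Argon2 and bcrypt hashers additionally require the argon2-cffi and bcrypt packages):

# settings.py -- prefer Django's standard, vetted hashers
PASSWORD_HASHERS = [
    "django.contrib.auth.hashers.Argon2PasswordHasher",
    "django.contrib.auth.hashers.PBKDF2PasswordHasher",
    "django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher",
    "django.contrib.auth.hashers.BCryptSHA256PasswordHasher",
]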

See

python:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session-cookie
  • the HttpOnly attribute offers additional protection (which is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it’s mandatory for session / sensitive-security cookies.

Sensitive Code Example

Flask:

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value') # Sensitive
    return response

Compliant Solution

Flask:

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value', httponly=True) # Compliant
    return response
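
For Flask’s own session cookie, the equivalent protection is controlled by the SESSION_COOKIE_HTTPONLY configuration key, which defaults to True and should be left enabled:

from flask import Flask

app = Flask(__name__)
app.config["SESSION_COOKIE_HTTPONLY"] = True  # Flask's default, made explicit here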

See

python:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would violate the European Union’s GDPR.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

import ldap

def init_ldap():
   connect = ldap.initialize('ldap://example:1389')

   connect.simple_bind('cn=root') # Noncompliant
   connect.simple_bind_s('cn=root') # Noncompliant
   connect.bind_s('cn=root', None) # Noncompliant
   connect.bind('cn=root', None) # Noncompliant

Compliant solution

import ldap
import os

def init_ldap():
   connect = ldap.initialize('ldap://example:1389')

   connect.simple_bind('cn=root', os.environ.get('LDAP_PASSWORD'))
   connect.simple_bind_s('cn=root', os.environ.get('LDAP_PASSWORD'))
   connect.bind_s('cn=root', os.environ.get('LDAP_PASSWORD'))
   connect.bind('cn=root', os.environ.get('LDAP_PASSWORD'))

Resources

Documentation

Standards

python:S4784

This rule is deprecated; use S5852, S2631 instead.

Using regular expressions is security-sensitive. It has led to several publicly disclosed vulnerabilities in the past.

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.
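
To see the blow-up in practice, the short sketch below times the vulnerable pattern against its linear equivalent on the same non-matching input. This is a rough illustration; absolute timings vary by interpreter, and each extra character roughly doubles the vulnerable call’s run time:

import re
import time

payload = "a" * 22  # no trailing "b", so neither pattern can match

start = time.perf_counter()
re.match("(a*)*b", payload)  # catastrophic backtracking: exponential in len(payload)
print("vulnerable pattern:", time.perf_counter() - start, "s")

start = time.perf_counter()
re.match("a*b", payload)  # equivalent pattern without nested quantifiers: linear
print("linear pattern:", time.perf_counter() - start, "s")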

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{.

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If possible, use a library which is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Sensitive Code Example

Django

from django.core.validators import RegexValidator
from django.urls import re_path

RegexValidator('(a*)*b')  # Sensitive

def define_http_endpoint(view):
    re_path(r'^(a*)*b/$', view)  # Sensitive

re module

import re
from re import compile, match, search, fullmatch, split, findall, finditer, sub, subn


input = 'input string'
replacement = 'replacement'

re.compile('(a*)*b')  # Sensitive
re.match('(a*)*b', input)  # Sensitive
re.search('(a*)*b', input)  # Sensitive
re.fullmatch('(a*)*b', input)  # Sensitive
re.split('(a*)*b', input)  # Sensitive
re.findall('(a*)*b', input)  # Sensitive
re.finditer('(a*)*b',input)  # Sensitive
re.sub('(a*)*b', replacement, input)  # Sensitive
re.subn('(a*)*b', replacement, input)  # Sensitive

regex module

import regex
from regex import compile, match, search, fullmatch, split, findall, finditer, sub, subn, subf, subfn, splititer

input = 'input string'
replacement = 'replacement'

regex.subf('(a*)*b', replacement, input)  # Sensitive
regex.subfn('(a*)*b', replacement, input)  # Sensitive
regex.splititer('(a*)*b', input)  # Sensitive

regex.compile('(a*)*b')  # Sensitive
regex.match('(a*)*b', input)  # Sensitive
regex.search('(a*)*b', input)  # Sensitive
regex.fullmatch('(a*)*b', input)  # Sensitive
regex.split('(a*)*b', input)  # Sensitive
regex.findall('(a*)*b', input)  # Sensitive
regex.finditer('(a*)*b',input)  # Sensitive
regex.sub('(a*)*b', replacement, input)  # Sensitive
regex.subn('(a*)*b', replacement, input)  # Sensitive

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

python:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, resetting a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2, or pbkdf2, because this slows down brute-force attacks.
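
For password storage specifically, the standard library already ships a deliberately slow, salted construction, hashlib.pbkdf2_hmac. A minimal sketch, with an iteration count that is an assumption and should be tuned to current guidance:

import hashlib
import os

password = b"user secret"
salt = os.urandom(16)        # fresh random salt per password
iterations = 600_000         # assumed value; follow current recommendations

digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
# Persist the salt, the iteration count and the digest together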

Sensitive Code Example

import hashlib
m = hashlib.md5()  # Sensitive

import hashlib
m = hashlib.sha1()  # Sensitive

import md5  # Sensitive and deprecated since Python 2.5; use the hashlib module instead.
m = md5.new()

import sha  # Sensitive and deprecated since Python 2.5; use the hashlib module instead.
m = sha.new()

Compliant Solution

import hashlib
m = hashlib.sha512()  # Compliant

See

python:S4792

This rule is deprecated, and will eventually be removed.

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and on how it is logged.

This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The logger’s level (info, warn, error) might filter out important information, and the logs might not include contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.
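
The following sketch illustrates several of these recommendations with the standard library only: a non-debug level, timestamps and the hostname in each record, and size-capped rotating files. Paths and limits are assumptions to adapt:

import logging
import socket
from logging.handlers import RotatingFileHandler

# Size-capped, rotated log files stored in a location restricted to administrators
handler = RotatingFileHandler(
    "/var/log/app/app.log",      # assumed path with restrictive permissions
    maxBytes=10 * 1024 * 1024,   # no single file grows beyond 10 MiB
    backupCount=5                # bounded disk usage overall
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s " + socket.gethostname() + " %(levelname)s %(name)s: %(message)s"
))

root = logging.getLogger()
root.setLevel(logging.INFO)      # keep info, warning and error messages; no debug in production
root.addHandler(handler)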

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them in the logs. This includes checking their size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts. It could for example use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

import logging
from logging import Logger, Handler, Filter
from logging.config import fileConfig, dictConfig

logging.basicConfig()  # Sensitive

logging.disable()  # Sensitive


def update_logging(logger_class):
    logging.setLoggerClass(logger_class)  # Sensitive


def set_last_resort(last_resort):
    logging.lastResort = last_resort  # Sensitive


class CustomLogger(Logger):  # Sensitive
    pass


class CustomHandler(Handler):  # Sensitive
    pass


class CustomFilter(Filter):  # Sensitive
    pass


def update_config(path, config):
    fileConfig(path)  # Sensitive
    dictConfig(config)  # Sensitive

See

python:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Python Standard Library

Code examples

The following code contains examples of disabled hostname validation.

Certificate validation is not enabled by default when _create_unverified_context or _create_stdlib_context is used. It is recommended to use create_default_context, without explicitly setting check_hostname to False.
Doing so creates a secure context that validates both hostnames and certificates.

Noncompliant code example

import ssl

example = ssl._create_stdlib_context() # Noncompliant

example = ssl._create_default_https_context()
example.check_hostname = False # Noncompliant

Compliant solution

import ssl

example = ssl.create_default_context()

example = ssl._create_default_https_context()

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.
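
Rather than disabling validation, the self-signed certificate can also be loaded into an otherwise default context; the file path below is a placeholder:

import ssl

# Keeps hostname and certificate validation enabled while trusting
# the development machine's self-signed certificate explicitly
context = ssl.create_default_context(cafile="/path/to/dev-selfsigned.pem")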

Resources

Standards

python:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following boolean settings can be enabled:

  • block_public_acls: whether to block public ACLs from being set on the S3 bucket.
  • ignore_public_acls: whether to ignore existing public ACLs set on the S3 bucket.
  • block_public_policy: whether to block public policies from being set on the S3 bucket.
  • restrict_public_buckets: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.

The other attribute, BlockPublicAccess.BLOCK_ACLS, only turns on block_public_acls and ignore_public_acls; public policies can still affect the S3 bucket.

However, all of those options can be enabled by setting the block_public_access property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • block_public_acls to True to block new attempts to set public ACLs.
  • ignore_public_acls to True to ignore existing public ACLs.
  • block_public_policy to True to block new attempts to set public policies.
  • restrict_public_buckets to True to restrict existing public policies.

Sensitive Code Example

By default, when not set, the block_public_access is fully deactivated (nothing is blocked):

bucket = s3.Bucket(self,
    "bucket"        # Sensitive
)

This block_public_access allows public ACLs to be set:

bucket = s3.Bucket(self,
    "bucket",
    block_public_access=s3.BlockPublicAccess(
        block_public_acls=False,       # Sensitive
        ignore_public_acls=True,
        block_public_policy=True,
        restrict_public_buckets=True
    )
)

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

bucket = s3.Bucket(self,
    "bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ACLS     # Sensitive
)

Compliant Solution

This block_public_access blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

bucket = s3.Bucket(self,
    "bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL # Compliant
)

A configuration similar to the one above can be obtained by setting all parameters of block_public_access:

bucket = s3.Bucket(self, "bucket",
    block_public_access=s3.BlockPublicAccess(       # Compliant
        block_public_acls=True,
        ignore_public_acls=True,
        block_public_policy=True,
        restrict_public_buckets=True
    )
)

See

python:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information can occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to only grant access to the necessary resources. A good practice to achieve this is to organize or tag resources according to the sensitivity level of the data they store or process; this makes secure access management less error-prone.

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions="iam:CreatePolicyVersion",
            resources=["*"] # Sensitive
        )
    ]
)

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

from aws_cdk import Aws
from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions="iam:CreatePolicyVersion",
            resources=[f"arn:aws:iam::{Aws.ACCOUNT_ID}:policy/team1/*"]
        )
    ]
)

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

python:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, hard-coded credentials have led to several publicly disclosed vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken" or "secret".

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

username = 'admin'
password = 'admin' # Sensitive
usernamePassword = 'user=admin&password=admin' # Sensitive

Compliant Solution

import os

username = os.getenv("username") # Compliant
password = os.getenv("password") # Compliant
usernamePassword = 'user=%s&password=%s' % (username, password) # Compliant

See

python:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the http protocol is being deprecated by major web browsers.

In the past, the use of clear-text protocols has led to several publicly disclosed vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com" # Sensitive
url = "ftp://anonymous@example.com" # Sensitive
url = "telnet://anonymous@example.com" # Sensitive

import telnetlib
cnx = telnetlib.Telnet("towel.blinkenlights.nl") # Sensitive

import ftplib
cnx = ftplib.FTP("ftp.example.com") # Sensitive

import smtplib
smtp = smtplib.SMTP("smtp.example.com", port=587) # Sensitive

For aws_cdk.aws_elasticloadbalancingv2.ApplicationLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.ApplicationLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener(
    "Listener-default",
    port=80, # Sensitive
    open=True
)
lb.add_listener(
    "Listener-http-explicit",
    protocol=elbv2.ApplicationProtocol.HTTP, # Sensitive
    port=8080,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.ApplicationListener(
    self,
    "listener-http-explicit-const",
    load_balancer=lb,
    protocol=elbv2.ApplicationProtocol.HTTP, # Sensitive
    port=8081,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)
lb = elbv2.NetworkLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener( # Sensitive
    "Listener-default",
    port=1234
)
lb.add_listener(
    "Listener-TCP-explicit",
    protocol=elbv2.Protocol.TCP, # Sensitive
    port=1337
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.NetworkListener(
    self,
    "Listener-TCP-explicit",
    protocol=elbv2.Protocol.TCP, # Sensitive
    port=1338,
    load_balancer=lb
)

For aws_cdk.aws_elasticloadbalancingv2.CfnListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.CfnListener(
    self,
    "listener-http",
    default_actions=[application_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="HTTP", # Sensitive
    port=80
)

elbv2.CfnListener(
    self,
    "listener-tcp",
    default_actions=[network_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="TCP", # Sensitive
    port=1000
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancerListener:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancerListener(
    external_port=10000,
    external_protocol=elb.LoadBalancingProtocol.TCP, # Sensitive
    internal_port=10000
)

elb.LoadBalancerListener(
    external_port=10080,
    external_protocol=elb.LoadBalancingProtocol.HTTP, # Sensitive
    internal_port=10080
)

For aws_cdk.aws_elasticloadbalancing.CfnLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb
)

elb.CfnLoadBalancer(
    self,
    "elb-tcp",
    listeners=[
        elb.CfnLoadBalancer.ListenersProperty(
            instance_port="10000",
            load_balancer_port="10000",
            protocol="tcp" # Sensitive
        )
    ],
    subnets=vpc.select_subnets().subnet_ids
)

elb.CfnLoadBalancer(
    self,
    "elb-http-dict",
    listeners=[
        {
            "instancePort":"10000",
            "loadBalancerPort":"10000",
            "protocol":"http" # Sensitive
        }
    ],
    subnets=vpc.select_subnets().subnet_ids
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

loadBalancer = elb.LoadBalancer(
    self,
    "elb-tcp-dict",
    vpc=vpc,
    listeners=[
        {
            "externalPort":10000,
            "externalProtocol":elb.LoadBalancingProtocol.TCP, # Sensitive
            "internalPort":10000
        }
    ]
)

loadBalancer.add_listener(
    external_port=10081,
    external_protocol=elb.LoadBalancingProtocol.HTTP, # Sensitive
    internal_port=10081
)
loadBalancer.add_listener(
    external_port=10001,
    external_protocol=elb.LoadBalancingProtocol.TCP, # Sensitive
    internal_port=10001
)

For aws_cdk.aws_elasticache.CfnReplicationGroup:

from aws_cdk import (
    aws_elasticache as elasticache
)

elasticache.CfnReplicationGroup(
    self,
    "unencrypted-explicit",
    replication_group_description="a replication group",
    automatic_failover_enabled=False,
    transit_encryption_enabled=False, # Sensitive
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

elasticache.CfnReplicationGroup( # Sensitive, encryption is disabled by default
    self,
    "unencrypted-implicit",
    replication_group_description="a test replication group",
    automatic_failover_enabled=False,
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

For aws_cdk.aws_kinesis.CfnStream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

kinesis.CfnStream( # Sensitive, encryption is disabled by default for CfnStreams
    self,
    "cfnstream-implicit-unencrytped",
    shard_count=1
)

kinesis.CfnStream(self,
    "cfnstream-explicit-unencrytped",
    shard_count=1,
    stream_encryption=None # Sensitive
)

For aws_cdk.aws_kinesis.Stream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

stream = kinesis.Stream(self,
    "stream-explicit-unencrypted",
    shard_count=1,
    encryption=kinesis.StreamEncryption.UNENCRYPTED # Sensitive
)

Compliant Solution

url = "https://example.com"
url = "sftp://anonymous@example.com"
url = "ssh://anonymous@example.com"

import ftplib
cnx = ftplib.FTP_TLS("ftp.example.com")

import smtplib
smtp = smtplib.SMTP("smtp.example.com", port=587)
smtp.starttls(context=context)

smtp_ssl = smtplib.SMTP_SSL("smtp.gmail.com", port=465)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

lb = elbv2.ApplicationLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener(
    "Listener-https-explicit",
    protocol=elbv2.ApplicationProtocol.HTTPS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443,
    open=True
)

lb.add_listener(
    "Listener-https-implicit",
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=8443,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.ApplicationListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.ApplicationListener(
    self,
    "listener-https-explicit-const",
    load_balancer=lb,
    protocol=elbv2.ApplicationProtocol.HTTPS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=444,
    open=True
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)
lb = elbv2.NetworkLoadBalancer(
    self,
    "LB",
    vpc=vpc,
    internet_facing=True
)

lb.add_listener(
    "Listener-TLS-explicit",
    protocol=elbv2.Protocol.TLS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443
)
lb.add_listener(
    "Listener-TLS-implicit",
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=1024
)

For aws_cdk.aws_elasticloadbalancingv2.NetworkListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.NetworkListener(
    self,
    "Listener-TLS-explicit",
    protocol=elbv2.Protocol.TLS,
    certificates=[elbv2.ListenerCertificate("certificateARN")],
    port=443,
    load_balancer=lb
)

For aws_cdk.aws_elasticloadbalancingv2.CfnListener:

from aws_cdk import (
    aws_elasticloadbalancingv2 as elbv2,
)

elbv2.CfnListener(
    self,
    "listener-https",
    default_actions=[application_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="HTTPS",
    port=443,
    certificates=[elbv2.CfnListener.CertificateProperty(
        certificate_arn="certificateARN"
    )]
)

elbv2.CfnListener(
    self,
    "listener-tls",
    default_actions=[network_default_action],
    load_balancer_arn=lb.load_balancer_arn,
    protocol="TLS",
    port=1001,
    certificates=[elbv2.CfnListener.CertificateProperty(
        certificate_arn="certificateARN"
    )]
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancerListener:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancerListener(
    external_port=10043,
    external_protocol=elb.LoadBalancingProtocol.SSL,
    internal_port=10043,
    ssl_certificate_arn="certificateARN"
)

elb.LoadBalancerListener(
    external_port=10443,
    external_protocol=elb.LoadBalancingProtocol.HTTPS,
    internal_port=10443,
    ssl_certificate_arn="certificateARN"
)

For aws_cdk.aws_elasticloadbalancing.CfnLoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.CfnLoadBalancer(
    self,
    "elb-ssl",
    listeners=[
        elb.CfnLoadBalancer.ListenersProperty(
            instance_port="10043",
            load_balancer_port="10043",
            protocol="ssl",
            ssl_certificate_id=CERTIFICATE_ARN
        )
    ],
    subnets=vpc.select_subnets().subnet_ids
)

elb.CfnLoadBalancer(
    self,
    "elb-https-dict",
    listeners=[
        {
            "instancePort":"10443",
            "loadBalancerPort":"10443",
            "protocol":"https",
            "sslCertificateId":CERTIFICATE_ARN
        }
    ],
    subnets=vpc.select_subnets().subnet_ids
)

For aws_cdk.aws_elasticloadbalancing.LoadBalancer:

from aws_cdk import (
    aws_elasticloadbalancing as elb,
)

elb.LoadBalancer(
    self,
    "elb-ssl",
    vpc=vpc,
    listeners=[
        {
            "externalPort":10044,
            "externalProtocol":elb.LoadBalancingProtocol.SSL,
            "internalPort":10044,
            "sslCertificateArn":"certificateARN"
        },
        {
            "externalPort":10444,
            "externalProtocol":elb.LoadBalancingProtocol.HTTPS,
            "internalPort":10444,
            "sslCertificateArn":"certificateARN"
        }
    ]
)

loadBalancer = elb.LoadBalancer(
        self,
        "elb-multi-listener",
        vpc=vpc
)
loadBalancer.add_listener(
    external_port=10045,
    external_protocol=elb.LoadBalancingProtocol.SSL,
    internal_port=10045,
    ssl_certificate_arn="certificateARN"
)
loadBalancer.add_listener(
    external_port=10445,
    external_protocol=elb.LoadBalancingProtocol.HTTPS,
    internal_port=10445,
    ssl_certificate_arn="certificateARN"
)

For aws_cdk.aws_elasticache.CfnReplicationGroup:

from aws_cdk import (
    aws_elasticache as elasticache
)

elasticache.CfnReplicationGroup(
    self,
    "encrypted-explicit",
    replication_group_description="a test replication group",
    automatic_failover_enabled=False,
    transit_encryption_enabled=True,
    cache_subnet_group_name="test",
    engine="redis",
    engine_version="3.2.6",
    num_cache_clusters=1,
    cache_node_type="cache.t2.micro"
)

For aws_cdk.aws_kinesis.CfnStream:

from aws_cdk import (
    aws_kinesis as kinesis,
)

kinesis.CfnStream(
    self,
    "cfnstream-explicit-encrytped",
    shard_count=1,
    stream_encryption=kinesis.CfnStream.StreamEncryptionProperty(
        encryption_type="KMS",
        key_id="alias/aws/kinesis"
    )
)

stream = kinesis.CfnStream(
    self,
    "cfnstream-explicit-encrytped-dict",
    shard_count=1,
    stream_encryption={
        "encryptionType": "KMS",
        "keyId": "alias/aws/kinesis"
    }
)

For aws_cdk.aws_kinesis.Stream:

from aws_cdk import (
    aws_kinesis as kinesis,
    aws_kms as kms
)

stream = kinesis.Stream( # Encryption is enabled by default for Streams
    self,
    "stream-implicit-encrypted",
    shard_count=1
)

stream = kinesis.Stream(
    self,
    "stream-explicit-encrypted-managed",
    shard_count=1,
    encryption=kinesis.StreamEncryption.MANAGED
)

key = kms.Key(self, "managed_key")
stream = kinesis.Stream(
    self,
    "stream-explicit-encrypted-selfmanaged",
    shard_count=1,
    encryption=kinesis.StreamEncryption.KMS,
    encryption_key=key
)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

python:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant only the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, one strategy is to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

from aws_cdk.aws_iam import PolicyStatement, Effect

PolicyStatement(
    effect=Effect.ALLOW,
    actions=["*"], # Sensitive
    resources=["arn:aws:iam:::user/*"]
)

Compliant Solution

A customer-managed policy that grants only the required permissions:

from aws_cdk.aws_iam import PolicyStatement, Effect

PolicyStatement(
    effect=Effect.ALLOW,
    actions=["iam:GetAccountSummary"],
    resources=["arn:aws:iam:::user/*"]
)

See

python:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws_cdk.aws_rds.DatabaseCluster and aws_cdk.aws_rds.DatabaseInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.DatabaseCluster( # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_rds.CfnDBCluster and aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.CfnDBCluster( # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution

For aws_cdk.aws_rds.DatabaseCluster and aws_cdk.aws_rds.DatabaseInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.DatabaseCluster(
            self,
            "example",
            storage_encrypted=True
        )

For aws_cdk.aws_rds.CfnDBCluster and aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import (
    aws_rds as rds
)

class DatabaseStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        rds.CfnDBCluster(
            self,
            "example",
            storage_encrypted=True
        )

See

python:S6785

GraphQL servers are vulnerable to Denial of Service attacks when they fail to limit the depth of queries. In such a case, an attacker is able to craft complex, deeply nested queries that force the application to consume a significant amount of resources.

Why is this an issue?

When a server receives a deeply nested query, it attempts to resolve all the requested data. This process can consume a substantial amount of computational resources, leading to a slowdown in server response times.

What is the potential impact?

A server that faces a resource exhaustion situation can become unstable. The exact impact will depend on how the affected application is deployed and how well the hosting server configuration is hardened.

In the worst case, when the application is deployed in an uncontained environment, directly on its host system, the memory exhaustion will affect the whole hosting server. The server’s operating system might start killing arbitrary memory-intensive processes, including the main application or other sensitive ones. This will result in a general operating failure, also known as a Denial of Service (DoS).

In cases where the application is deployed in a virtualized or otherwise contained environment, or where resource usage limits are in place, the consequences are limited to the vulnerable application only. In that case, other processes and applications hosted on the same server may keep running without disruption, but the vulnerable application will still stop working properly.

In general, this kind of DoS attack can have severe financial consequences, particularly when the affected systems are business-critical.

How to fix it

Code examples

Noncompliant code example

from graphql_server.flask import GraphQLView

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(  # Noncompliant
        name="api",
        schema=schema,
    )
)

Compliant solution

from graphql_server.flask import GraphQLView
from graphene.validation import depth_limit_validator

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(
        name="api",
        schema=schema,
        validation_rules=[
           depth_limit_validator(10) # Choose a value that fits your application's requirements
        ]
    )
)

How does this work?

Avoid circular references

A prerequisite for a deeply nested query to be executed is the presence of circular references in the database schema. Avoid or minimize circular references when designing the application’s database schema.

Set limits

Limit the depth of the queries your server will accept. By setting a maximum depth, you can ensure that excessively nested queries are rejected. Remember, the values for maximum depth and complexity should be set according to your application’s specific needs. Setting these limits too low could restrict legitimate queries, while setting them too high could leave your server vulnerable to attacks.

The easiest way to set such a limit is to use the query validation API available from Graphene 3. Applications running Graphene 2 should consider upgrading to Graphene 3 to benefit from this API.

Resources

Standards

python:S6786

This vulnerability exposes information about all the APIs available on a GraphQL API server. This information can be used to discover weaknesses in the API that can be exploited.

Why is this an issue?

GraphQL introspection is a feature that allows client applications to query the schema of a GraphQL API at runtime. It provides a way for developers to explore and understand the available data and operations supported by the API.

This feature is a diagnostic tool that should only be used in the development phase as its presence also creates risks.

Clear documentation and API references should be considered better discoverability tools for a public GraphQL API.

What is the potential impact?

An attacker can use introspection to identify all of the operations and data types supported by the server. This information can then be used to identify potential targets for attacks.

Exploitation of private APIs

Even when a GraphQL API server is open to access by third-party applications, it may contain APIs that are intended only for private use. Introspection allows these private APIs to be discovered.

Private APIs often do not receive the same level of security rigor as public APIs. For example, they may skip input validation because the API is only expected to be called from trusted applications. This can create avenues for attack that are not present on public APIs.

Exposure of sensitive data

GraphQL allows for multiple related objects to be retrieved using a single API call. This provides an efficient method of obtaining data for use in a client application.

An attacker may be able to use these relationships between objects to traverse the data structure. They may be able to find a link to sensitive data that the developer did not intentionally make available.

How to fix it

Code examples

Noncompliant code example

from graphql_server.flask import GraphQLView

app.add_url_rule("/api",
    view_func=GraphQLView.as_view(  # Noncompliant
        name="api",
        schema=schema,
    )
)

Compliant solution

Make sure that introspection is disabled in production environments. You can use the following code sample as a reference, in conjunction with your own methods for distinguishing between production and non-production environments.

from graphql_server.flask import GraphQLView

# Only one of the following needs to be used
from graphql.validation import NoSchemaIntrospectionCustomRule # graphql-core v3
from graphene.validation import DisableIntrospection           # graphene v3


app.add_url_rule("/api",
    view_func=GraphQLView.as_view(
        name="api",
        schema=schema,
        validation_rules=[
            NoSchemaIntrospectionCustomRule,
            DisableIntrospection,
        ]
    )
)

How does this work?

Disabling introspection

The GraphQL server framework should be instructed to disable introspection in production environments. This prevents any attacker attempt to retrieve schema information from the server at runtime.

Each GraphQL framework will have a different method of doing this, possibly including:

  • Changing a simple boolean setting.
  • Adding a middleware module to the request processing chain.
  • Adding a GraphQL validator that rejects introspection keywords.

If introspection is required, it should only be made available to the smallest possible audience. This could include development environments, users with a specific right, or requests from a specific set of IP addresses.

Resources

Articles & blog posts

Standards

python:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_opensearchservice.Domain:

from aws_cdk.aws_opensearchservice import Domain, EngineVersion

class DomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        Domain(self, "Sensitive",
            version=EngineVersion.OPENSEARCH_1_3
        ) # Sensitive, encryption is disabled by default

For aws_cdk.aws_opensearchservice.CfnDomain:

from aws_cdk.aws_opensearchservice import CfnDomain

class CfnDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        CfnDomain(self, "Sensitive") # Sensitive, encryption is disabled by default

Compliant Solution

For aws_cdk.aws_opensearchservice.Domain:

from aws_cdk.aws_opensearchservice import Domain, EncryptionAtRestOptions, EngineVersion

class DomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        Domain(self, "Compliant",
            version=EngineVersion.OPENSEARCH_1_3,
            encryption_at_rest=EncryptionAtRestOptions(
                enabled=True
            )
        )

For aws_cdk.aws_opensearchservice.CfnDomain:

from aws_cdk import Stack
from aws_cdk.aws_opensearchservice import CfnDomain
from constructs import Construct

class CfnDomainStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        CfnDomain(self, "Compliant",
            encryption_at_rest_options=CfnDomain.EncryptionAtRestOptionsProperty(
                enabled=True
            )
        )

See

python:S6781

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a JWT secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to encode and decode JWTs when using a symmetric signing algorithm, and an attacker could potentially use it to perform malicious actions.

For example, an attacker could use the secret key to create their own authentication tokens that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality.

In the worst-case scenario, an attacker could be able to execute arbitrary code on the application by abusing administrative features, and take over its hosting server.

How to fix it in Flask

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Changing the secret value is sufficient to invalidate any data that it protected.

Code examples

Noncompliant code example

The following noncompliant code contains a hard-coded secret that can be exposed unintentionally.

from flask import Flask

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = "secret"  # Noncompliant

Compliant solution

A solution is to set this secret in an environment variable.

from flask import Flask
import os

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = os.environ["JWT_SECRET_KEY"]

Going the extra mile

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.
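
For instance, if the application runs on AWS, the rotated secret could be read from AWS Secrets Manager at startup. This is only a sketch, assuming boto3 is available and that a secret named app/jwt-secret-key (a hypothetical name) exists:

import boto3
from flask import Flask

# Fetch the secret from the vault at startup instead of hard-coding it.
client = boto3.client("secretsmanager")
secret = client.get_secret_value(SecretId="app/jwt-secret-key")

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = secret["SecretString"]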

Resources

Standards

Documentation

python:S2077

Formatted SQL queries can be difficult to maintain and debug, and they increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Use parameterized queries, prepared statements, or bind variables so that untrusted values are passed to the database separately from the SQL text.

Sensitive Code Example

from django.db import models
from django.db import connection
from django.db import connections
from django.db.models.expressions import RawSQL

value = input()


class MyUser(models.Model):
    name = models.CharField(max_length=200)


def query_my_user(request, params, value):
    with connection.cursor() as cursor:
        cursor.execute("{0}".format(value))  # Sensitive

    # https://docs.djangoproject.com/en/2.1/ref/models/expressions/#raw-sql-expressions

    RawSQL("select col from %s where mycol = %s and othercol = " + value, ("test",))  # Sensitive

    # https://docs.djangoproject.com/en/2.1/ref/models/querysets/#extra

    MyUser.objects.extra(
        select={
            'mycol': "select col from sometable where mycol = %s and othercol = " + value,  # Sensitive
        },
        select_params=(someparam,),
    )

Compliant Solution

cursor = connection.cursor()
sql_select_query = """select col from sometable where mycol = %s and othercol = %s"""

select_tuple = (1, value)

cursor.execute(sql_select_query, select_tuple)  # Compliant, the query is parameterized
connection.commit()
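
For the Django APIs flagged above, parameterization works the same way: raw() and extra() both accept parameter sequences that the database driver binds safely. A sketch, reusing the MyUser model and value from the sensitive example:

# Parameters travel separately from the SQL text, so nothing
# user-controlled is concatenated into the query.
users = MyUser.objects.raw(
    "select id, name from sometable where mycol = %s and othercol = %s",
    [1, value],
)

MyUser.objects.extra(
    select={'mycol': "select col from sometable where mycol = %s"},
    select_params=(value,),
)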

See

python:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (for example, by using the wildcard "*"). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit both the permissions that are granted and the resources to which they are granted. Doing so ensures that no users or roles have more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["lambda:UpdateFunctionCode"],
            resources=["*"]  # Noncompliant
        )
    ]
)

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

from aws_cdk.aws_iam import Effect, PolicyDocument, PolicyStatement

PolicyDocument(
    statements=[
        PolicyStatement(
            effect=Effect.ALLOW,
            actions=["lambda:UpdateFunctionCode"],
            resources=[
                "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
            ]
        )
    ]
)

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When a policy with overly broad permissions is made stricter at a later time, it can be harder to ensure that no gaps are forgotten. In such cases, it might be useful to monitor the users or roles to verify which permissions are actually used.

Resources

Documentation

Articles & blog posts

Standards

python:S6437

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The consequences vary greatly depending on the situation and the secret-exposed audience. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, each user of the application can use the third-party service without limit for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

The following code example is noncompliant because it uses a hardcoded secret value.

Noncompliant code example

from requests_oauthlib.oauth2_session import OAuth2Session

scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

token = oauth.fetch_token(
        'https://api.example.com/o/oauth2/token',
        client_secret='example_Password') # Noncompliant

Compliant solution

from os import environ
from requests_oauthlib.oauth2_session import OAuth2Session

scope = ['https://www.api.example.com/auth/example.data']

oauth = OAuth2Session(
    'example_client_id',
    redirect_uri='https://callback.example.com/uri',
    scope=scope)

password = environ.get('OAUTH_SECRET')

token = oauth.fetch_token(
        'https://api.example.com/o/oauth2/token',
        client_secret=password)

How does this work?

While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This allows the secret value to differ between environments and avoids storing the password in the source code itself.

Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change.
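
For local development, one common approach (an assumption here, not something the rule mandates) is a .env file kept out of version control and loaded with the python-dotenv package:

# .env (excluded from version control):
# OAUTH_SECRET=...

from os import environ
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # populates os.environ from the .env file
password = environ.get('OAUTH_SECRET')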

Resources

Documentation

Standards

python:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Python Standard Library

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

import xml.sax
from xml.sax.handler import feature_external_ges

parser = xml.sax.make_parser()
myHandler = MyHandler()
parser.setContentHandler(myHandler)
parser.setFeature(feature_external_ges, True) # Noncompliant
parser.parse('xxe.xml')

Compliant solution

Since Python 3.7.1, the SAX parser does not process general external entities by default.

import xml.sax
from xml.sax.handler import feature_external_ges

parser = xml.sax.make_parser()
myHandler = MyHandler()
parser.setContentHandler(myHandler)
parser.setFeature(feature_external_ges, False)
parser.parse('xxe.xml')

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
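
Another option, not required by this rule but widely used, is the third-party defusedxml package, whose drop-in parsers reject external entities by default:

# Sketch assuming the defusedxml package is installed.
import defusedxml.ElementTree as ET

tree = ET.parse('xxe.xml')  # raises an exception on forbidden entities
root = tree.getroot()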

Resources

Standards

python:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. Encrypting this data at rest ensures that adversaries who gain physical access to the storage media cannot decrypt it.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sagemaker.CfnNotebookInstance:

from aws_cdk import (
    Stack,
    aws_sagemaker as sagemaker
)
from constructs import Construct

class CfnSagemakerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        sagemaker.CfnNotebookInstance(
            self, "Sensitive",
            instance_type="instanceType",
            role_arn="roleArn"
        )  # Sensitive, no KMS key is set by default; thus, encryption is disabled

Compliant Solution

For aws_cdk.aws_sagemaker.CfnNotebookInstance:

from aws_cdk import (
    Stack,
    aws_sagemaker as sagemaker,
    aws_kms as kms
)
from constructs import Construct

class CfnSagemakerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        my_key = kms.Key(self, "Key")
        sagemaker.CfnNotebookInstance(
            self, "Compliant",
            instance_type="instanceType",
            role_arn="roleArn",
            kms_key_id=my_key.key_id
        )

See

python:S5439

This rule is deprecated; use S5247 instead.

Why is this an issue?

Template engines have an HTML autoescape mechanism that protects web applications against most common cross-site-scripting (XSS) vulnerabilities.

By default, it automatically replaces HTML special characters in any template variables. This secure by design configuration should not be globally disabled.

Escaping HTML from template variables prevents switching into any execution context, like <script>. Disabling autoescaping forces developers to manually escape each template variable for the application to be safe. A more pragmatic approach is to escape by default and to manually disable escaping when needed.

A successful exploitation of a cross-site-scripting vulnerability allows an attacker to execute malicious JavaScript code in a user’s web browser. The most severe XSS attacks involve:

  • Forced redirection
  • Modification of the displayed content
  • Takeover of user accounts after disclosure of sensitive information like session cookies or passwords

This rule supports Jinja2 environments.

Noncompliant code example

from jinja2 import Environment

env = Environment() # Noncompliant; New Jinja2 Environment has autoescape set to false
env = Environment(autoescape=False) # Noncompliant

Compliant solution

from jinja2 import Environment
env = Environment(autoescape=True) # Compliant
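
If escaping should depend on the template type, Jinja2 also provides select_autoescape, which enables autoescaping based on the template file extension:

from jinja2 import Environment, select_autoescape

# Escape HTML/XML templates; other extensions are left untouched.
env = Environment(autoescape=select_autoescape(['html', 'xml']))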

Resources

python:S6779

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

If a Flask secret key leaks to an unintended audience, it can have serious security implications for the corresponding application. The secret key is used to sign cookies and other sensitive data, so an attacker could potentially use it to perform malicious actions.

For example, an attacker could use the secret key to create their own cookies that appear to be legitimate, allowing them to bypass authentication and gain access to sensitive data or functionality.

In the worst-case scenario, an attacker could be able to execute arbitrary code on the application and take over its hosting server.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

In Flask, changing the secret value is sufficient to invalidate any data that it protected.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

Noncompliant code example

from flask import Flask

app = Flask(__name__)
app.config['SECRET_KEY'] = "secret"  # Noncompliant

Compliant solution

from flask import Flask
import os

app = Flask(__name__)
app.config['SECRET_KEY'] = os.environ["SECRET_KEY"]

Resources

Standards

Documentation

python:S1523

This rule is deprecated, and will eventually be removed.

Executing code dynamically is security-sensitive and has led to vulnerabilities in the past.

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use cases. However, most of the time their use is frowned upon because they also increase the risk of maliciously injected code. Such attacks can run either on the server or in the client (for example, an XSS attack) and have a huge impact on an application’s security.

This rule marks for review each occurrence of such dynamic code execution. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (for example, Security Managers in Java, or iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
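
When the dynamic input is expected to contain only data, not code, ast.literal_eval is a safer alternative to eval because it parses Python literals without executing anything:

import ast

value = input()

# Accepts only literals (strings, numbers, tuples, lists, dicts,
# sets, booleans, None); raises ValueError for anything else.
data = ast.literal_eval(value)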

Sensitive Code Example

import os

value = input()
command = 'os.system("%s")' % value

def evaluate(command, file, mode):
    eval(command)  # Sensitive.

eval(command)  # Sensitive. Dynamic code

def execute(code, file, mode):
    exec(code)  # Sensitive.
    exec(compile(code, file, mode))  # Sensitive.

exec(command)  # Sensitive.

See

python:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

For os.umask:

os.umask(0)  # Sensitive

For os.chmod, os.lchmod, and os.fchmod:

os.chmod("/tmp/fs", stat.S_IRWXO)   # Sensitive
os.lchmod("/tmp/fs", stat.S_IRWXO)  # Sensitive
os.fchmod(fd, stat.S_IRWXO)         # Sensitive

Compliant Solution

For os.umask:

os.umask(0o777)

For os.chmod, os.lchmod, and os.fchmod:

os.chmod("/tmp/fs", stat.S_IRWXU)
os.lchmod("/tmp/fs", stat.S_IRWXU)
os.fchmod(fd, stat.S_IRWXU)

See

python:S5443

Operating systems have global directories to which any user has write access. Those folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. An application manipulating files in these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted, or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to several vulnerabilities.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs will make sure:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

file = open("/tmp/temporary_file","w+") # Sensitive
tmp_dir = os.environ.get('TMPDIR') # Sensitive
file = open(tmp_dir+"/temporary_file","w+")

Compliant Solution

import tempfile

file = tempfile.TemporaryFile(dir="/tmp/my_subdirectory", mode="w+") # Compliant
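
The dedicated sub-directory itself can be created with tempfile.mkdtemp, which picks an unpredictable name and restricts the directory to the creating user (mode 0o700 on POSIX systems):

import tempfile

work_dir = tempfile.mkdtemp()  # private, unpredictably named directory
file = tempfile.TemporaryFile(dir=work_dir, mode="w+")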

See

python:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

import tempfile

filename = tempfile.mktemp() # Noncompliant
tmp_file = open(filename, "w+")

Compliant solution

import tempfile

tmp_file1 = tempfile.NamedTemporaryFile(delete=False)
tmp_file2 = tempfile.NamedTemporaryFile()

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Use a secure API function

Temporary file handling APIs generally provide secure functions to create temporary files. In most cases, they operate atomically, creating and opening a file with a unique and unpredictable name in a single call. These functions can often replace less secure alternatives without significant development effort.

Here, the example compliant code uses the more secure tempfile.NamedTemporaryFile function to handle the temporary file creation.

Strong security controls

Temporary files can be created using unsafe functions and API as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.
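
As a rough sketch of those requirements using low-level file APIs (the directory path below is hypothetical and must already be restricted to the current user):

import os
import secrets

private_dir = "/var/lib/myapp/tmp"  # hypothetical, non-public directory

# Unpredictable name from a cryptographically secure generator;
# O_EXCL makes creation fail if the file already exists.
name = os.path.join(private_dir, secrets.token_hex(16))
fd = os.open(name, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
with os.fdopen(fd, "w") as tmp_file:
    tmp_file.write("...")
os.remove(name)  # destroy the file once it is no longer needed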

Resources

Documentation

Standards

python:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability that two users share both a password and a salt depends on the salt size: the shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:

  • Argon2
  • PBKDF2
  • Scrypt
  • Bcrypt

When they are used for password storage, using a secure, random salt is required.

However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.
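
For instance, deriving an encryption key from a passphrase in a deterministic way may legitimately rely on a fixed, protocol-defined salt. A sketch using hashlib.pbkdf2_hmac, where the salt value is a hypothetical protocol constant:

import hashlib

# Fixed salt mandated by the (hypothetical) protocol specification:
# acceptable here because the goal is deterministic key derivation,
# not password storage.
PROTOCOL_SALT = b"example-protocol-v1"

def derive_key(passphrase: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase, PROTOCOL_SALT, 600_000)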

How to fix it in Python Standard Library

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

import hashlib

hash = hashlib.scrypt(password, salt=b"F3MdWpeHeeSjlUxvKBnzzA", n=2**17, r=8, p=1) # Noncompliant

Compliant solution

import hashlib
import secrets

salt = secrets.token_bytes(32)
hash = hashlib.scrypt(password, salt=salt, n=2**17, r=8, p=1)

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 32 bytes (256 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the secrets.token_bytes function. This one internally uses a cryptographically secure pseudo random number generator.

Resources

Standards

python:S3752

An HTTP method is safe when used to perform a read-only operation, such as retrieving information. In contrast, an unsafe HTTP method is used to change the state of an application, for instance to update a user’s profile on a web application.

Common safe HTTP methods are GET, HEAD, or OPTIONS.

Common unsafe HTTP methods are POST, PUT and DELETE.

Allowing both safe and unsafe HTTP methods to perform a specific operation on a web application could impact its security. For example, CSRF protections usually only cover operations performed with unsafe HTTP methods.

Ask Yourself Whether

  • HTTP methods are not defined at all for a route/controller of the application.
  • Safe HTTP methods are defined and used for a route/controller that can change the state of an application.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

For all the routes/controllers of an application, the authorized HTTP methods should be explicitly defined and safe HTTP methods should only be used to perform read-only operations.

Sensitive Code Example

For Django:

# No method restriction
def view(request):  # Sensitive
    return HttpResponse("...")

@require_http_methods(["GET", "POST"])  # Sensitive
def view(request):
    return HttpResponse("...")

For Flask:

@methods.route('/sensitive', methods=['GET', 'POST'])  # Sensitive
def view():
    return Response("...", 200)

Compliant Solution

For Django:

@require_http_methods(["POST"])
def view(request):
    return HttpResponse("...")
@require_POST
def view(request):
    return HttpResponse("...")
@require_GET
def view(request):
    return HttpResponse("...")
@require_safe
def view(request):
    return HttpResponse("...")

For Flask:

@methods.route('/compliant1')
def view1():
    return Response("...", 200)

@methods.route('/compliant2', methods=['GET'])
def view2():
    return Response("...", 200)

See

python:S4721

This rule is deprecated, and will eventually be removed.

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process; shell metacharacters can then be used (for instance, when parameters are user-controlled) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

Python 3

subprocess.run(cmd, shell=True)  # Sensitive
subprocess.Popen(cmd, shell=True)  # Sensitive
subprocess.call(cmd, shell=True)  # Sensitive
subprocess.check_call(cmd, shell=True)  # Sensitive
subprocess.check_output(cmd, shell=True)  # Sensitive
os.system(cmd)  # Sensitive: a shell is always spawned

Python 2

cmd = "when a string is passed through these function, a shell is spawn"
(_, child_stdout, _) = os.popen2(cmd)  # Sensitive
(_, child_stdout, _) = os.popen3(cmd)  # Sensitive
(_, child_stdout) = os.popen4(cmd)  # Sensitive


(child_stdout, _) = popen2.popen2(cmd)  # Sensitive
(child_stdout, _, _) = popen2.popen3(cmd)  # Sensitive
(child_stdout, _) = popen2.popen4(cmd)  # Sensitive

Compliant Solution

Python 3

# by default shell=False, a shell is not spawned
subprocess.run(cmd)  # Compliant
subprocess.Popen(cmd)  # Compliant
subprocess.call(cmd)  # Compliant
subprocess.check_call(cmd)  # Compliant
subprocess.check_output(cmd)  # Compliant

# always in a subprocess:
os.spawnl(mode, path, *cmd)  # Compliant
os.spawnle(mode, path, *cmd, env)  # Compliant
os.spawnlp(mode, file, *cmd)  # Compliant
os.spawnlpe(mode, file, *cmd, env)  # Compliant
os.spawnv(mode, path, cmd)  # Compliant
os.spawnve(mode, path, cmd, env)  # Compliant
os.spawnvp(mode, file, cmd)  # Compliant
os.spawnvpe(mode, file, cmd, env)  # Compliant

(child_stdout) = os.popen(cmd, mode, 1)  # Compliant
(_, output) = subprocess.getstatusoutput(cmd)  # Compliant
out = subprocess.getoutput(cmd)  # Compliant
os.startfile(path)  # Compliant
os.execl(path, *cmd)  # Compliant
os.execle(path, *cmd, env)  # Compliant
os.execlp(file, *cmd)  # Compliant
os.execlpe(file, *cmd, env)  # Compliant
os.execv(path, cmd)  # Compliant
os.execve(path, cmd, env)  # Compliant
os.execvp(file, cmd)  # Compliant
os.execvpe(file, cmd, env)  # Compliant

Python 2

cmdsargs = ("use", "a", "sequence", "to", "directly", "start", "a", "subprocess")

(_, child_stdout) = os.popen2(cmdsargs)  # Compliant
(_, child_stdout, _) = os.popen3(cmdsargs)  # Compliant
(_, child_stdout) = os.popen4(cmdsargs)  # Compliant

(child_stdout, _) = popen2.popen2(cmdsargs)  # Compliant
(child_stdout, _, _) = popen2.popen3(cmdsargs)  # Compliant
(child_stdout, _) = popen2.popen4(cmdsargs)  # Compliant

See

python:S6463

Allowing unrestricted outbound communications can lead to data leaks.

A restrictive security group is an additional layer of protection that might prevent the abuse or exploitation of a resource. For example, it complicates the exfiltration of data in the case of a successfully exploited vulnerability.

When deciding if outgoing connections should be limited, consider that limiting the connections results in additional administration and maintenance work.

Ask Yourself Whether

  • The resource has access to sensitive data.
  • The resource is part of a private network.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to restrict outgoing connections to a set of trusted destinations.

Sensitive Code Example

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import (
    aws_ec2 as ec2
)

ec2.SecurityGroup(  # Sensitive; allow_all_outbound is enabled by default
    self,
    "example",
    vpc=vpc
)

Compliant Solution

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import (
    aws_ec2 as ec2
)

sg = ec2.SecurityGroup(
    self,
    "example",
    vpc=vpc,
    allow_all_outbound=False
)

sg.add_egress_rule(
    peer=ec2.Peer.ipv4("203.0.113.127/32"),
    connection=ec2.Port.tcp(443)
)

See

python:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. That way, adversaries cannot access the data even if they gain physical access to the storage medium or otherwise leak a stored message.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sns.Topic:

from aws_cdk import (
    Stack,
    aws_sns as sns
)
from constructs import Construct

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sns.Topic( # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_sns.CfnTopic:

from aws_cdk import (
    Stack,
    aws_sns as sns
)
from constructs import Construct

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sns.CfnTopic( # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution

For aws_cdk.aws_sns.Topic:

from aws_cdk import (
    Stack,
    aws_sns as sns,
    aws_kms as kms
)
from constructs import Construct

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "key")
        sns.Topic(
            self,
            "example",
            master_key=my_key
        )

For aws_cdk.aws_sns.CfnTopic:

from aws_cdk import (
    Stack,
    aws_sns as sns,
    aws_kms as kms
)
from constructs import Construct

class TopicStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "key")
        sns.CfnTopic(
            self,
            "example",
            kms_master_key_id=my_key.key_id
        )

See

python:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It tempts developers to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when an IP address is hardcoded, solving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

ip = '192.168.12.42'
sock = socket.socket()
sock.bind((ip, 9090))

Compliant Solution

ip = config.get(section, 'ipAddress')
sock = socket.socket()
sock.bind((ip, 9090))
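
With the standard configparser module, for example, the address can live in a configuration file deployed alongside the application rather than in the code (config.ini and its section name are hypothetical):

import configparser
import socket

# config.ini, deployed separately:
# [server]
# ipAddress = 192.0.2.10
config = configparser.ConfigParser()
config.read('config.ini')

ip = config.get('server', 'ipAddress')
sock = socket.socket()
sock.bind((ip, 9090))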

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

python:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive and has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus, passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every reference to sys.argv, call to optparse.OptionParser() or a call to argparse.ArgumentParser(). The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions, you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing them in the command line. It is common to write them on the process' standard input, or give the path to a file containing the information.
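
A sketch of both recommendations, using argparse to constrain an argument's values and to accept a file path instead of the secret itself (flag names are illustrative):

import argparse

parser = argparse.ArgumentParser()
# Validation: only accept an integer within an explicit range.
parser.add_argument("--count", type=int, choices=range(1, 101))
# Take a path to a secrets file so the secret itself never
# appears in the process list.
parser.add_argument("--token-file", type=argparse.FileType("r"))

args = parser.parse_args()
token = args.token_file.read().strip() if args.token_file else None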

See

python:S4828

Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system.

Accidentally setting an incorrect PID or signal or allowing untrusted sources to assign arbitrary values to these parameters may result in a denial of service.

Also, the system treats the signal differently if the destination PID is less than or equal to 0. This different behavior may affect multiple processes with the same (E)UID simultaneously if the call is left uncontrolled.

Ask Yourself Whether

  • The parameters pid and sig are untrusted (they come from an external source).
  • This function is triggered by non-administrators.
  • Signal handlers on the target processes stop important functions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For stateful applications with user management, ensure that only administrators trigger this code.
  • Verify that the pid and sig parameters are correct before using them.
  • Ensure that the process sending the signals runs with as few OS privileges as possible.
  • Isolate the process on the system based on its (E)UID.
  • Ensure that the signal does not interrupt any essential functions when intercepted by a target’s signal handlers.

Sensitive Code Example

import os

@app.route("/kill-pid/<pid>")
def send_signal(pid):
    os.kill(pid, 9)  # Sensitive

@app.route("/kill-pgid/<pgid>")
def send_signal(pgid):
    os.killpg(pgid, 9)  # Sensitive

Compliant Solution

import os

@app.route("/kill-pid/<pid>")
def send_signal(pid):
    # Validate the untrusted PID,
    # With a pre-approved list or authorization checks
    if is_valid_pid(pid):
        os.kill(pid, 9)

@app.route("/kill-pgid/<pgid>")
def send_signal(pgid):
    # Validate the untrusted PGID,
    # With a pre-approved list or authorization checks
    if is_valid_pgid(pgid):
        os.kill(pgid, 9)
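
The validation helpers are left undefined in the example above; one possible (purely illustrative) implementation checks the identifier against a pre-approved set:

# Hypothetical allow-list; a real check might instead consult a
# process registry or verify the caller's authorization.
APPROVED_PIDS = {1234, 5678}

def is_valid_pid(pid: int) -> bool:
    return pid in APPROVED_PIDS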

See

python:S4829

This rule is deprecated, and will eventually be removed.

Reading standard input is security-sensitive and has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

Python 2 and Python 3

import sys
from sys import stdin, __stdin__

# Any reference to sys.stdin or sys.__stdin__ without a method call is Sensitive
sys.stdin  # Sensitive

for line in sys.stdin:  # Sensitive
    print(line)

it = iter(sys.stdin)  # Sensitive
line = next(it)

# Calling the following methods on stdin or __stdin__ is sensitive
sys.stdin.read()  # Sensitive
sys.stdin.readline()  # Sensitive
sys.stdin.readlines()  # Sensitive

# Calling other methods on stdin or __stdin__ does not require a review, thus it is not Sensitive
sys.stdin.seekable()  # Ok
# ...

Python 2 only

raw_input('What is your password?')  # Sensitive

Python 3 only

input('What is your password?')  # Sensitive

Function fileinput.input and class fileinput.FileInput read the standard input when the list of files is empty.

for line in fileinput.input():  # Sensitive
    print(line)

for line in fileinput.FileInput():  # Sensitive
    print(line)

for line in fileinput.input(['setup.py']):  # Ok
    print(line)

for line in fileinput.FileInput(['setup.py']):  # Ok
    print(line)

See

python:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons, such as quick maintenance, saving time, or simply by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For aws_cdk.aws_ec2.Instance and similar constructs:

from aws_cdk import aws_ec2 as ec2

ec2.Instance(
    self,
    "vpc_subnet_public",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC) # Sensitive
)

For aws_cdk.aws_ec2.CfnInstance:

from aws_cdk import aws_ec2 as ec2

ec2.CfnInstance(
    self,
    "cfn_public_exposed",
    instance_type="t2.micro",
    image_id="ami-0ea0f26a6d50850c5",
    network_interfaces=[
        ec2.CfnInstance.NetworkInterfaceProperty(
            device_index="0",
            associate_public_ip_address=True, # Sensitive
            delete_on_termination=True,
            subnet_id=vpc.select_subnets(subnet_type=ec2.SubnetType.PUBLIC).subnet_ids[0]
        )
    ]
)

For aws_cdk.aws_dms.CfnReplicationInstance:

from aws_cdk import aws_dms as dms

rep_instance = dms.CfnReplicationInstance(
    self,
    "explicit_public",
    replication_instance_class="dms.t2.micro",
    allocated_storage=5,
    publicly_accessible=True, # Sensitive
    replication_subnet_group_identifier=subnet_group.replication_subnet_group_identifier,
    vpc_security_group_ids=[vpc.vpc_default_security_group]
)

For aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import aws_rds as rds
from aws_cdk import aws_ec2 as ec2

rds_subnet_group_public = rds.CfnDBSubnetGroup(
    self,
    "public_subnet",
    db_subnet_group_description="Subnets",
    subnet_ids=vpc.select_subnets(
        subnet_type=ec2.SubnetType.PUBLIC
    ).subnet_ids
)

rds.CfnDBInstance(
    self,
    "public-public-subnet",
    engine="postgres",
    master_username="foobar",
    master_user_password="12345678",
    db_instance_class="db.r5.large",
    allocated_storage="200",
    iops=1000,
    db_subnet_group_name=rds_subnet_group_public.ref,
    publicly_accessible=True, # Sensitive
    vpc_security_groups=[sg.security_group_id]
)

Compliant Solution

For aws_cdk.aws_ec2.Instance:

from aws_cdk import aws_ec2 as ec2

ec2.Instance(
    self,
    "vpc_subnet_private",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc,
    vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT)
)

For aws_cdk.aws_ec2.CfnInstance:

from aws_cdk import aws_ec2 as ec2

ec2.CfnInstance(
    self,
    "cfn_private",
    instance_type="t2.micro",
    image_id="ami-0ea0f26a6d50850c5",
    network_interfaces=[
        ec2.CfnInstance.NetworkInterfaceProperty(
            device_index="0",
            associate_public_ip_address=False, # Compliant
            delete_on_termination=True,
            subnet_id=vpc.select_subnets(subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT).subnet_ids[0]
        )
    ]
)

For aws_cdk.aws_dms.CfnReplicationInstance:

from aws_cdk import aws_dms as dms

rep_instance = dms.CfnReplicationInstance(
    self,
    "explicit_private",
    replication_instance_class="dms.t2.micro",
    allocated_storage=5,
    publicly_accessible=False,
    replication_subnet_group_identifier=subnet_group.replication_subnet_group_identifier,
    vpc_security_group_ids=[vpc.vpc_default_security_group]
)

For aws_cdk.aws_rds.CfnDBInstance:

from aws_cdk import aws_rds as rds
from aws_cdk import aws_ec2 as ec2

rds_subnet_group_private = rds.CfnDBSubnetGroup(
    self,
    "private_subnet",
    db_subnet_group_description="Subnets",
    subnet_ids=vpc.select_subnets(
        subnet_type=ec2.SubnetType.PRIVATE_WITH_NAT
    ).subnet_ids
)

rds.CfnDBInstance(
    self,
    "private-private-subnet",
    engine="postgres",
    master_username="foobar",
    master_user_password="12345678",
    db_instance_class="db.r5.large",
    allocated_storage="200",
    iops=1000,
    db_subnet_group_name=rds_subnet_group_private.ref,
    publicly_accessible=False,
    vpc_security_groups=[sg.security_group_id]
)

See

python:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Python Standard Library

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled when _create_unverified_context or _create_stdlib_context is used. It is recommended to use _create_default_https_context instead, which creates a secure context that validates certificates.

Noncompliant code example

import ssl

ctx1 = ssl._create_unverified_context() # Noncompliant
ctx2 = ssl._create_stdlib_context() # Noncompliant

ctx3 = ssl.create_default_context()
ctx3.verify_mode = ssl.CERT_NONE # Noncompliant

Compliant solution

import ssl

ctx = ssl.create_default_context()
ctx.verify_mode = ssl.CERT_REQUIRED

# By default, certificate validation is enabled
ctx = ssl._create_default_https_context()

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
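
A minimal sketch of this approach, assuming a PEM-encoded CA bundle at the hypothetical path ca_bundle.pem:

import ssl

# Validation stays enabled; only the set of trusted roots is extended.
ctx = ssl.create_default_context(cafile="ca_bundle.pem")

# Alternatively, extend an already-created context:
ctx2 = ssl.create_default_context()
ctx2.load_verify_locations(cafile="ca_bundle.pem")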

Resources

Standards

python:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

For aws_cdk.aws_ec2.Instance and other constructs that support a connections attribute:

from aws_cdk import aws_ec2 as ec2

instance = ec2.Instance(
    self,
    "my_instance",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc
)

instance.connections.allow_from(
    ec2.Peer.any_ipv4(), # Noncompliant
    ec2.Port.tcp(22),
    description="Allows SSH from all IPv4"
)
instance.connections.allow_from_any_ipv4( # Noncompliant
    ec2.Port.tcp(3389),
    description="Allows Terminal Server from all IPv4"
)

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import aws_ec2 as ec2
security_group = ec2.SecurityGroup(
    self,
    "custom-security-group",
    vpc=vpc
)

security_group.add_ingress_rule(
    ec2.Peer.any_ipv4(), # Noncompliant
    ec2.Port.tcp_range(1, 1024)
)

For aws_cdk.aws_ec2.CfnSecurityGroup:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroup(
    self,
    "cfn-based-security-group",
    group_description="cfn based security group",
    group_name="cfn-based-security-group",
    vpc_id=vpc.vpc_id,
    security_group_ingress=[
        ec2.CfnSecurityGroup.IngressProperty( # Noncompliant
            ip_protocol="6",
            cidr_ip="0.0.0.0/0",
            from_port=22,
            to_port=22
        ),
        ec2.CfnSecurityGroup.IngressProperty( # Noncompliant
            ip_protocol="tcp",
            cidr_ip="0.0.0.0/0",
            from_port=3389,
            to_port=3389
        ),
        { # Noncompliant
            "ipProtocol":"-1",
            "cidrIpv6":"::/0"
        }
    ]
)

For aws_cdk.aws_ec2.CfnSecurityGroupIngress:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroupIngress( # Noncompliant
    self,
    "ingress-all-ip-tcp-ssh",
    ip_protocol="tcp",
    cidr_ip="0.0.0.0/0",
    from_port=22,
    to_port=22,
    group_id=security_group.attr_group_id
)

ec2.CfnSecurityGroupIngress( # Noncompliant
    self,
    "ingress-all-ipv6-all-tcp",
    ip_protocol="-1",
    cidr_ipv6="::/0",
    group_id=security_group.attr_group_id
)

Compliant solution

For aws_cdk.aws_ec2.Instance and other constructs that support a connections attribute:

from aws_cdk import aws_ec2 as ec2

instance = ec2.Instance(
    self,
    "my_instance",
    instance_type=nano_t2,
    machine_image=ec2.MachineImage.latest_amazon_linux(),
    vpc=vpc
)

instance.connections.allow_from_any_ipv4(
    ec2.Port.tcp(1234),
    description="Allows 1234 from all IPv4"
)

instance.connections.allow_from(
    ec2.Peer.ipv4("192.0.2.0/24"),
    ec2.Port.tcp(22),
    description="Allows SSH from all IPv4"
)

For aws_cdk.aws_ec2.SecurityGroup:

from aws_cdk import aws_ec2 as ec2
security_group = ec2.SecurityGroup(
    self,
    "custom-security-group",
    vpc=vpc
)

security_group.add_ingress_rule(
    ec2.Peer.any_ipv4(),
    ec2.Port.tcp_range(1024, 1048)
)

For aws_cdk.aws_ec2.CfnSecurityGroup:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroup(
    self,
    "cfn-based-security-group",
    group_description="cfn based security group",
    group_name="cfn-based-security-group",
    vpc_id=vpc.vpc_id,
    security_group_ingress=[
        ec2.CfnSecurityGroup.IngressProperty(
            ip_protocol="tcp",
            cidr_ip="0.0.0.0/0",
            from_port=1024,
            to_port=1048
        ),
        {
            "ipProtocol":"6",
            "cidrIp":"192.0.2.0/24",
            "fromPort":22,
            "toPort":22
        }
    ]
)

For aws_cdk.aws_ec2.CfnSecurityGroupIngress:

from aws_cdk import aws_ec2 as ec2

ec2.CfnSecurityGroupIngress(
    self,
    "ingress-all-ipv4-tcp-http",
    ip_protocol="6",
    cidr_ip="0.0.0.0/0",
    from_port=80,
    to_port=80,
    group_id=security_group.attr_group_id
)

ec2.CfnSecurityGroupIngress(
    self,
    "ingress-range-tcp-rdp",
    ip_protocol="tcp",
    cidr_ip="192.0.2.0/24",
    from_port=3389,
    to_port=3389,
    group_id=security_group.attr_group_id
)

Resources

Documentation

Standards

python:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.

Sensitive Code Example

For aws_cdk.aws_apigateway.Resource:

from aws_cdk import (
    aws_apigateway as apigateway
)

resource = api.root.add_resource("example")
resource.add_method(
    "GET",
    authorization_type=apigateway.AuthorizationType.NONE  # Sensitive
)

For aws_cdk.aws_apigatewayv2.CfnRoute:

from aws_cdk import (
    aws_apigatewayv2 as apigateway
)

apigateway.CfnRoute(
    self,
    "no-auth",
    api_id=api.ref,
    route_key="GET /test",
    authorization_type="NONE"  # Sensitive
)

Compliant Solution

For aws_cdk.aws_apigateway.Resource:

from aws_cdk import (
    aws_apigateway as apigateway
)

opts = apigateway.MethodOptions(
    authorization_type=apigateway.AuthorizationType.IAM
)
resource = api.root.add_resource(
    "example",
    default_method_options=opts
)
resource.add_method(
    "POST",
    authorization_type=apigateway.AuthorizationType.IAM
)
resource.add_method(  # authorization_type is inherited from the Resource's configured default_method_options
    "POST"
)
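
For the CUSTOM authorization type, a hedged sketch using a Lambda authorizer (auth_fn is an assumed, pre-existing aws_lambda.Function that validates tokens):

authorizer = apigateway.TokenAuthorizer(
    self,
    "my-authorizer",
    handler=auth_fn  # assumed Lambda function
)
resource.add_method(
    "GET",
    authorizer=authorizer  # implies AuthorizationType.CUSTOM
)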

For aws_cdk.aws_apigatewayv2.CfnRoute:

from aws_cdk import (
    aws_apigatewayv2 as apigateway
)

apigateway.CfnRoute(
    self,
    "auth",
    api_id=api.ref,
    route_key="GET /test",
    authorization_type="AWS_IAM"
)

See

python:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • it is uncertain whether the website contains mixed content or not (i.e. whether HTTPS is used everywhere or not).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

Flask

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value') # Sensitive
    return response

Compliant Solution

Flask

from flask import Response

@app.route('/')
def index():
    response = Response()
    response.set_cookie('key', 'value', secure=True) # Compliant
    return response
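
For Flask's own session cookie, the flag can also be enabled globally through configuration; a minimal sketch using the standard Flask config keys:

app.config.update(
    SESSION_COOKIE_SECURE=True,   # session cookie is only sent over HTTPS
    SESSION_COOKIE_HTTPONLY=True  # session cookie is not readable from JavaScript
)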

See

python:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive and has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act as directives for the browser and relax the same-origin access control policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

Django:

CORS_ORIGIN_ALLOW_ALL = True # Sensitive

Flask:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*", "send_wildcard": True}}) # Sensitive

User-controlled origin:

origin = request.headers['ORIGIN']
resp = Response()
resp.headers['Access-Control-Allow-Origin'] = origin # Sensitive

Compliant Solution

Django:

CORS_ORIGIN_ALLOW_ALL = False # Compliant

Flask:

from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*", "send_wildcard": False}}) # Compliant
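
A stricter alternative is to allow-list trusted origins explicitly instead of relying on wildcard handling; a sketch (the domain is illustrative):

CORS(app, resources={r"/api/*": {"origins": ["https://trusted.example.com"]}})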

User-controlled origin validated with an allow-list:

origin = request.headers['ORIGIN']
resp = Response()
if origin in TRUSTED_ORIGINS:
    resp.headers['Access-Control-Allow-Origin'] = origin

See

python:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, and Groovy's template engine allow configuration of automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; its effectiveness depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not sufficient when variables are used in an HTML attribute, because the ':' character is not escaped and an attack like the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

from jinja2 import Environment

env = Environment() # Sensitive: New Jinja2 Environment has autoescape set to false
env = Environment(autoescape=False) # Sensitive

Compliant Solution

from jinja2 import Environment
env = Environment(autoescape=True) # Compliant
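
Jinja2 also provides a helper that enables escaping based on the template file extension; a minimal sketch:

from jinja2 import Environment, select_autoescape

env = Environment(autoescape=select_autoescape(["html", "htm", "xml"]))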

See

python:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sqs.Queue:

from constructs import Construct
from aws_cdk import (
    Stack,
    aws_sqs as sqs
)

class QueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sqs.Queue( # Sensitive, unencrypted by default
            self,
            "example"
        )

For aws_cdk.aws_sqs.CfnQueue:

from constructs import Construct
from aws_cdk import (
    Stack,
    aws_sqs as sqs
)

class CfnQueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sqs.CfnQueue( # Sensitive, unencrypted by default
            self,
            "example"
        )

Compliant Solution

For aws_cdk.aws_sqs.Queue:

from constructs import Construct
from aws_cdk import (
    Stack,
    aws_sqs as sqs
)

class QueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sqs.Queue(
            self,
            "example",
            encryption=sqs.QueueEncryption.KMS_MANAGED
        )
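
If a customer-managed key is preferred over the SQS-managed one, the Queue construct also accepts an explicit key; a sketch, assuming aws_kms is imported as kms:

my_key = kms.Key(self, "key")
sqs.Queue(
    self,
    "example-cmk",
    encryption=sqs.QueueEncryption.KMS,
    encryption_master_key=my_key
)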

For aws_cdk.aws_sqs.CfnQueue:

from constructs import Construct
from aws_cdk import (
    Stack,
    aws_kms as kms,
    aws_sqs as sqs
)

class CfnQueueStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        my_key = kms.Key(self, "key")
        sqs.CfnQueue(
            self,
            "example",
            kms_master_key_id=my_key.key_id
        )

See

python:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise leak stored files, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_efs.FileSystem and aws_cdk.aws_efs.CfnFileSystem:

from aws_cdk import (
    aws_efs as efs
)

efs.FileSystem(
    self,
    "example",
    encrypted=False  # Sensitive
)

Compliant Solution

For aws_cdk.aws_efs.FileSystem and aws_cdk.aws_efs.CfnFileSystem:

from aws_cdk import (
    aws_efs as efs
)

efs.FileSystem(
    self,
    "example",
    encrypted=True
)
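
A customer-managed key can also be supplied; a sketch, assuming aws_kms is imported as kms and vpc is an existing VPC (FileSystem requires one):

my_key = kms.Key(self, "key")
efs.FileSystem(
    self,
    "example-cmk",
    vpc=vpc,
    encrypted=True,
    kms_key=my_key
)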

See

cloudformation:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in API Gateway

Code examples

These code samples illustrate how to fix this issue in both APIGateway and ApiGatewayV2.

Noncompliant code example

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGateway::DomainName
    Properties:
      SecurityPolicy: "TLS_1_0"  # Noncompliant

ApiGatewayV2 domain names use a weak TLS security policy by default:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi: # Noncompliant
    Type: AWS::ApiGatewayV2::DomainName

Compliant solution

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGateway::DomainName
    Properties:
      SecurityPolicy: "TLS_1_2"
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  CustomApi:
    Type: AWS::ApiGatewayV2::DomainName
    Properties:
      DomainNameConfigurations:
        - SecurityPolicy: "TLS_1_2"

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
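
In CDK terms, the same fix can be expressed as follows; a hedged Python sketch, where cert is an assumed aws_certificatemanager certificate:

from aws_cdk import aws_apigateway as apigateway

apigateway.DomainName(
    self,
    "custom-domain",
    domain_name="api.example.com",  # illustrative
    certificate=cert,
    security_policy=apigateway.SecurityPolicy.TLS_1_2
)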

Resources

Articles & blog posts

Standards

cloudformation:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information can occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., by only granting access to necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. Managing secure access control is then less prone to errors.

Sensitive Code Example

Update permission is granted for all policies using the wildcard (*) in the Resource property:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "iam:CreatePolicyVersion"
                  Resource:
                    - "*" # Sensitive
        Roles:
            - !Ref MyRole

Compliant Solution

Restrict update permission to the appropriate subset of policies:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "iam:CreatePolicyVersion"
                  Resource:
                    - !Sub "arn:aws:iam::${AWS::AccountId}:policy/team1/*"
        Roles:
            - !Ref MyRole

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

cloudformation:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SNS::Topic:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Topic:  # Sensitive, encryption disabled by default
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: "unencrypted_topic"

Compliant Solution

For AWS::SNS::Topic:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Topic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: "encrypted_topic"
      KmsMasterKeyId:
        Fn::GetAtt:
          - TestKey
          - KeyId
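
An equivalent CDK sketch, assuming aws_sns as sns and aws_kms as kms are imported:

my_key = kms.Key(self, "key")
sns.Topic(
    self,
    "encrypted-topic",
    master_key=my_key
)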

See

cloudformation:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the use of the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For AWS Kinesis Data Streams, server-side encryption is disabled by default:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  KinesisStream: # Sensitive
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      # No StreamEncryption

For Amazon ElastiCache:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupId: "example"
      TransitEncryptionEnabled: false  # Sensitive

For Amazon ECS:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  EcsTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: "service"
      Volumes:
        -
          Name: "storage"
          EFSVolumeConfiguration:
            FilesystemId: !Ref FS
            TransitEncryption: "DISABLED"  # Sensitive

For AWS Load Balancer Listeners:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  HTTPlistener:
   Type: "AWS::ElasticLoadBalancingV2::Listener"
   Properties:
     DefaultActions:
       - Type: "redirect"
         RedirectConfig:
           Protocol: "HTTP"
     Protocol: "HTTP" # Sensitive

For Amazon OpenSearch domains:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: example
      DomainEndpointOptions:
        EnforceHTTPS: false # Sensitive
      NodeToNodeEncryptionOptions:
        Enabled: false # Sensitive

For Amazon MSK communications between clients and brokers:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MSKCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: MSKCluster
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS_PLAINTEXT # Sensitive
          InCluster: false # Sensitive

Compliant Solution

For AWS Kinesis Data Streams server-side encryption:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  KinesisStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1
      StreamEncryption:
         EncryptionType: KMS

For Amazon ElastiCache:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ReplicationGroupId: "example"
      TransitEncryptionEnabled: true

For Amazon ECS:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  EcsTask:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: "service"
      Volumes:
        -
          Name: "storage"
          EFSVolumeConfiguration:
            FilesystemId: !Ref FS
            TransitEncryption: "ENABLED"

For AWS Load Balancer Listeners:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  HTTPlistener:
   Type: "AWS::ElasticLoadBalancingV2::Listener"
   Properties:
     DefaultActions:
       - Type: "redirect"
         RedirectConfig:
           Protocol: "HTTPS"
     Protocol: "HTTP"

For Amazon OpenSearch domains:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Example:
    Type: AWS::OpenSearchService::Domain
    Properties:
      DomainName: example
      DomainEndpointOptions:
        EnforceHTTPS: true
      NodeToNodeEncryptionOptions:
        Enabled: true

For Amazon MSK communications between clients and brokers, data in transit is encrypted by default, allowing you to omit writing the EncryptionInTransit configuration. However, if you need to configure it explicitly, this configuration is compliant:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MSKCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: MSKCluster
      EncryptionInfo:
        EncryptionInTransit:
          ClientBroker: TLS
          InCluster: true

See

cloudformation:S6245

This rule is deprecated and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with some regulations, like HIPAA or PCI DSS, and other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256

See

cloudformation:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to deny all HTTP requests:

  • for all objects (*) of the bucket
  • for all principals (*)
  • for all actions (*)

Sensitive Code Example

No secure policy is attached to this S3 bucket:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive

A policy is defined but forces only HTTPS communication for some users:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"

  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Principal:
              AWS: # Sensitive: only one principal is forced to use https
                - 'arn:aws:iam::123456789123:root'
            Action: "*"
            Resource: arn:aws:s3:::mynoncompliantbuckets6249/*
            Condition:
              Bool:
                "aws:SecureTransport": false

Compliant Solution

A secure policy that denies the use of all HTTP requests:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "mycompliantbucket"

  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: "mycompliantbucket"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Principal:
              AWS: "*" # all principals should use https
            Action: "*" # for any actions
            Resource: arn:aws:s3:::mycompliantbucket/* # for any resources
            Condition:
              Bool:
                "aws:SecureTransport": false

See

cloudformation:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons such as for quick maintenance, time saving, or by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

DMS and EC2 instances have a public IP address assigned to them:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  DMSInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      PubliclyAccessible: true # Sensitive, by default it's also set to true

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      NetworkInterfaces:
        - AssociatePublicIpAddress: true # Sensitive, by default it's also set to true
          DeviceIndex: "0"

Compliant Solution

DMS and EC2 instances don’t have a public IP address:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  DMSInstance:
    Type: AWS::DMS::ReplicationInstance
    Properties:
      PubliclyAccessible: false

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      NetworkInterfaces:
        - AssociatePublicIpAddress: false
          DeviceIndex: "0"

See

cloudformation:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PublicRead, PublicReadWrite grant respectively "read" and "read and write" privileges to everyone in the world (AllUsers group).
  • AuthenticatedRead grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, etc.).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant only the permissions users need for their required tasks. In the context of canned ACLs, set the ACL to private (the default) and, if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users (i.e. anyone in the world, authenticated or not) have read and write permissions with the PublicReadWrite access control:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"
      AccessControl: "PublicReadWrite"

Compliant Solution

With the private access control (default), only the bucket owner has read/write permissions on the bucket and its ACL.

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "mycompliantbucket"
      AccessControl: "Private"

See

cloudformation:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:

  • BlockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • IgnorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • BlockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • RestrictPublicBuckets: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, etc.).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • BlockPublicAcls to true to block new attempts to set public ACLs.
  • IgnorePublicAcls to true to ignore existing public ACLs.
  • BlockPublicPolicy to true to block new attempts to set public policies.
  • RestrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, the PublicAccessBlockConfiguration is fully deactivated (nothing is blocked):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucketdefault:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "example"

This PublicAccessBlockConfiguration allows public ACL to be set:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "example"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false # should be true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

Compliant Solution

This PublicAccessBlockConfiguration blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "example"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
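
In CDK code, all four settings can be enabled at once; a sketch:

from aws_cdk import aws_s3 as s3

s3.Bucket(
    self,
    "blocked-bucket",
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL  # enables all four block settings
)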

See

cloudformation:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "*" # Sensitive
                  Resource:
                    - !Ref MyResource
        Roles:
            - !Ref MyRole

Compliant Solution

A customer-managed policy that grants only the required permissions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExamplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
        PolicyDocument:
            Version: "2012-10-17"
            Statement:
                - Effect: Allow
                  Action:
                    - "s3:GetObject"
                  Resource:
                    - !Ref MyResource
        Roles:
            - !Ref MyRole
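
A hedged CDK sketch of the same idea, where my_resource_arn stands in for the concrete ARN of the target resource:

from aws_cdk import aws_iam as iam

statement = iam.PolicyStatement(
    effect=iam.Effect.ALLOW,
    actions=["s3:GetObject"],    # a narrow action set instead of "*"
    resources=[my_resource_arn]  # assumed, the ARN of the target resource
)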

See

cloudformation:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For AWS::RDS::DBInstance and AWS::RDS::DBCluster:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: false  # Sensitive, disabled by default
  DatabaseCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      StorageEncrypted: false  # Sensitive, disabled by default

Compliant Solution

For AWS::RDS::DBInstance and AWS::RDS::DBCluster:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DatabaseInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      StorageEncrypted: true
  DatabaseCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      StorageEncrypted: true
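
A hedged CDK equivalent for the instance, where vpc is an assumed existing VPC:

from aws_cdk import aws_rds as rds

rds.DatabaseInstance(
    self,
    "encrypted-instance",
    engine=rds.DatabaseInstanceEngine.POSTGRES,
    vpc=vpc,
    storage_encrypted=True
)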

See

cloudformation:S6308

Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances.

To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt Elasticsearch domains that contain sensitive information.

Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::Elasticsearch::Domain:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Elasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      EncryptionAtRestOptions:
        Enabled: false  # Sensitive, disabled by default

Compliant Solution

For AWS::Elasticsearch::Domain:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Elasticsearch:
    Type: AWS::Elasticsearch::Domain
    Properties:
      EncryptionAtRestOptions:
        Enabled: true

See

cloudformation:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (by using wildcard "*" permissions, for example). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit the amount of permissions that are granted and the amount of resources to which these permissions are granted. Doing so ensures that there are no users or roles that have more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in Identity and Access Management

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

AWSTemplateFormatVersion: 2010-09-09

Resources:
  # Update Lambda code
  lambdaUpdatePolicy:
    # Noncompliant
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: lambdaUpdatePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - lambda:UpdateFunctionCode
            Resource: "*"

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

AWSTemplateFormatVersion: 2010-09-09

Resources:
  # Update Lambda code
  lambdaUpdatePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: lambdaUpdatePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - lambda:UpdateFunctionCode
            Resource: "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. Starting instead from a policy with overly broad permissions and making it stricter later makes it harder to ensure that no gaps are forgotten. In this case, it can be useful to monitor the users or roles to verify which permissions are actually used.
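
As an illustration, a minimal sketch of a policy built up this way grants a single read action on a single resource path (the policy name, bucket name, and action are illustrative, not part of the rule):

AWSTemplateFormatVersion: 2010-09-09

Resources:
  reportsReadPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: reportsReadPolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - s3:GetObject      # only the action this workload actually needs
            Resource: "arn:aws:s3:::example-reports/*"  # only the resources it needs it on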

Resources

Documentation

Articles & blog posts

Standards

cloudformation:S6321

Why is this an issue?

Cloud platforms such as AWS support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref ExampleVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22 # SSH traffic
          CidrIp: "0.0.0.0/0" # from all IP addresses is authorized

Compliant solution

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !Ref ExampleVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: "1.2.3.0/24"

Resources

Documentation

Standards

cloudformation:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP); a sketch is shown after the compliant examples below.
  • CUSTOM, to use an AWS-independent OIDC provider, connected to the infrastructure with a Lambda authorizer.

Sensitive Code Example

A public API that doesn’t have access control implemented:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: NONE # Sensitive
      HttpMethod: GET

A Serverless Application Model (SAM) API resource that is public by default:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi: # Sensitive
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod

Compliant Solution

An API that implements AWS IAM permissions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: AWS_IAM
      HttpMethod: GET

A Serverless Application Model (SAM) API resource that has to be requested using a key:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ExampleApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        ApiKeyRequired: true
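
As a sketch of the COGNITO_USER_POOLS option listed above, an API method can delegate authorization to a Cognito user pool (ExampleRestApi and ExampleUserPool are assumed to be defined elsewhere in the template):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CognitoAuthorizer:
    Type: AWS::ApiGateway::Authorizer
    Properties:
      Name: CognitoAuthorizer
      Type: COGNITO_USER_POOLS
      RestApiId: !Ref ExampleRestApi
      IdentitySource: method.request.header.Authorization
      ProviderARNs:
        - !GetAtt ExampleUserPool.Arn
  ExampleMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: COGNITO_USER_POOLS
      AuthorizerId: !Ref CognitoAuthorizer
      HttpMethod: GET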

See

cloudformation:S6364

Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Amazon Relational Database Service clusters and instances:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabase:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: NonCompliantDatabase
      BackupRetentionPeriod: 2 # Sensitive

Compliant Solution

For Amazon Relational Database Service clusters and instances:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabase:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      DBName: CompliantDatabase
      BackupRetentionPeriod: 5
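
The BackupRetentionPeriod property applies at the cluster level as well; a minimal sketch for an Aurora-style cluster (the engine and resource name are illustrative):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  relationaldatabasecluster:
    Type: 'AWS::RDS::DBCluster'
    Properties:
      Engine: aurora-mysql
      BackupRetentionPeriod: 5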

cloudformation:S6252

S3 buckets can be in three states related to versioning:

  • unversioned (default one)
  • enabled
  • suspended

When the S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the S3 bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning, which makes it possible to retrieve and restore previous versions of an object.

Sensitive Code Example

Versioning is disabled by default:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "Example"

Compliant Solution

Versioning is enabled:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Compliant
    Properties:
      BucketName: "Example"
      VersioningConfiguration:
        Status: Enabled

See

cloudformation:S6258

Disabling logging of this component can lead to missing traceability in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be monitored.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable the logging capabilities of this component. Depending on the component, new permissions might be required by the logging storage components.
You should consult the official documentation to enable logging for the impacted components. For example, AWS Application Load Balancer Access Logs require an additional bucket policy.

Sensitive Code Example

For Amazon S3 access requests:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket' # Sensitive
    Properties:
      BucketName: "mynoncompliantbucket"

For Amazon API Gateway stages:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Prod: # Sensitive
    Type: AWS::ApiGateway::Stage
    Properties:
      StageName: Prod
      Description: Prod Stage
      TracingEnabled: false # Sensitive

For Amazon Neptune clusters:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports: []  # Sensitive

For Amazon MSK broker logs:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  SensitiveCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: Sensitive Cluster
      LoggingInfo:
        BrokerLogs: # Sensitive
          CloudWatchLogs:
            Enabled: false
            LogGroup: CWLG
          Firehose:
            DeliveryStream: DS
            Enabled: false

For Amazon DocDB:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DocDBOmittingLogs: # Sensitive
    Type: "AWS::DocDB::DBCluster"
    Properties:
      DBClusterIdentifier : "DB Without Logs"

For Amazon MQ:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Broker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      Logs:  # Sensitive
        Audit: false
        General: false

For Amazon Redshift:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ClusterOmittingLogging: # Sensitive
    Type: "AWS::Redshift::Cluster"
    Properties:
      DBName: "Redshift Warehouse Cluster"

For Amazon OpenSearch service or Amazon Elasticsearch service:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OpenSearchServiceDomain:
    Type: 'AWS::OpenSearchService::Domain'
    Properties:
      LogPublishingOptions: # Sensitive
        ES_APPLICATION_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-application-logs'
          Enabled: true
        INDEX_SLOW_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-index-slow-logs'
          Enabled: true

For Amazon CloudFront distributions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudFrontDistribution: # Sensitive
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultRootObject: "index.html"

For Amazon Elastic Load Balancing:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  LoadBalancer:
      Type: AWS::ElasticLoadBalancing::LoadBalancer
      Properties:
        AccessLoggingPolicy:
          Enabled: false # Sensitive

For Amazon Load Balancing (v2):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ApplicationLoadBalancer:
   Type: AWS::ElasticLoadBalancingV2::LoadBalancer
   Properties:
     Name: CompliantLoadBalancer
     LoadBalancerAttributes:
       - Key: "access_logs.s3.enabled"
         Value: false # Sensitive

Compliant Solution

For Amazon S3 access requests:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketLogs:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "mycompliantloggingbucket"
      AccessControl: LogDeliveryWrite

  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "mycompliantbucket"
      LoggingConfiguration:
        DestinationBucketName: !Ref S3BucketLogs
        LogFilePrefix: testing-logs

For Amazon API Gateway stages:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Prod:
    Type: AWS::ApiGateway::Stage
    Properties:
      StageName: Prod
      Description: Prod Stage
      TracingEnabled: true
      AccessLogSetting:
        DestinationArn: "arn:aws:logs:eu-west-1:123456789:test"
        Format: "..."

For Amazon Neptune clusters:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Cluster:
    Type: AWS::Neptune::DBCluster
    Properties:
      EnableCloudwatchLogsExports: ["audit"]

For Amazon MSK broker logs:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  SensitiveCluster:
    Type: 'AWS::MSK::Cluster'
    Properties:
      ClusterName: Sensitive Cluster
      LoggingInfo:
        BrokerLogs:
          Firehose:
            DeliveryStream: DS
            Enabled: true
          S3:
            Bucket: Broker Logs
            Enabled: true
            Prefix: "logs/msk-brokers-"

For Amazon DocDB:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DocDBWithLogs:
    Type: "AWS::DocDB::DBCluster"
    Properties:
      DBClusterIdentifier : "DB With Logs"
      EnableCloudwatchLogsExports:
         - audit

For Amazon MQ enable Audit or General:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  Broker:
    Type: AWS::AmazonMQ::Broker
    Properties:
      Logs:
        Audit: true
        General: true

For Amazon Redshift:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CompliantCluster:
    Type: "AWS::Redshift::Cluster"
    Properties:
      DBName: "Redshift Warehouse Cluster"
      LoggingProperties:
        BucketName: "Infra Logs"
        S3KeyPrefix: "log/redshift-"

For Amazon OpenSearch service, or Amazon Elasticsearch service:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  OpenSearchServiceDomain:
    Type: 'AWS::OpenSearchService::Domain'
    Properties:
      LogPublishingOptions:
        AUDIT_LOGS:
          CloudWatchLogsLogGroupArn: 'arn:aws:logs:us-east-1:1234:log-group:es-audit-logs'
          Enabled: true

For Amazon CloudFront distributions:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  CloudFrontDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultRootObject: "index.html"
        Logging:
          Bucket: "mycompliantbucket"
          Prefix: "log/cloudfront-"

For Amazon Elastic Load Balancing:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  LoadBalancer:
      Type: AWS::ElasticLoadBalancing::LoadBalancer
      Properties:
        AccessLoggingPolicy:
          Enabled: true
          S3BucketName: mycompliantbucket
          S3BucketPrefix: "log/loadbalancer-"

For Amazon Load Balancing (v2):

AWSTemplateFormatVersion: 2010-09-09
Resources:
  ApplicationLoadBalancer:
   Type: AWS::ElasticLoadBalancingV2::LoadBalancer
   Properties:
     Name: CompliantLoadBalancer
     LoadBalancerAttributes:
       - Key: "access_logs.s3.enabled"
         Value: true
       - Key: "access_logs.s3.bucket"
         Value: "mycompliantbucket"
       - Key: "access_logs.s3.prefix"
         Value: "log/elbv2-"

See

cloudformation:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant users only the permissions required for their tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy' # Sensitive
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: "*" # all principals / anonymous access
            Action: "s3:PutObject" # can put object
            Resource: arn:aws:s3:::mybucket/*

Compliant Solution

This policy allows only the authorized users:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  S3BucketPolicy:
    Type: 'AWS::S3::BucketPolicy' # Compliant
    Properties:
      Bucket: !Ref S3Bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:root' # only this principal
            Action: "s3:PutObject" # can put object
            Resource: arn:aws:s3:::mybucket/*

See

cloudformation:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For AWS::EC2::Volume:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      Encrypted: false  # Sensitive

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume  # Sensitive as encryption is disabled by default

Compliant Solution

For AWS::EC2::Volume:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Ec2Volume:
    Type: AWS::EC2::Volume
    Properties:
      Encrypted: true

See

cloudformation:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. Encryption ensures that adversaries who physically access the storage media cannot read the data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SageMaker::NotebookInstance:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Notebook:  # Sensitive, encryption disabled by default
    Type: AWS::SageMaker::NotebookInstance

Compliant Solution

For AWS::SageMaker::NotebookInstance:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Notebook:
    Type: AWS::SageMaker::NotebookInstance
    Properties:
      KmsKeyId:
        Fn::GetAtt:
          - SomeKey
          - KeyId
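
The compliant example references a SomeKey resource that is not shown; a minimal sketch of how such a key could be declared (the description is illustrative, and a key policy can be added as needed):

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SomeKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Key used to encrypt the SageMaker notebook instance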

See

cloudformation:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::SQS::Queue:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:  # Sensitive, encryption disabled by default
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "unencrypted_queue"

Compliant Solution

For AWS::SQS::Queue:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Queue:
    Type: AWS::SQS::Queue
    Properties:
      DisplayName: "encrypted_queue"
      KmsMasterKeyId:
        Fn::GetAtt:
          - TestKey
          - KeyId

See

cloudformation:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise manage to leak stored files, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For AWS::EFS::FileSystem:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs:  # Sensitive, encryption disabled by default
    Type: AWS::EFS::FileSystem

Compliant Solution

For AWS::EFS::FileSystem:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Fs:
    Type: AWS::EFS::FileSystem
    Properties:
      Encrypted: true

See

vbnet:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

Imports System.IO
Imports System.Security.Cryptography

Public Sub Encrypt(key As Byte(), dataToEncrypt As Byte(), target As MemoryStream)
    Dim aes = New AesCryptoServiceProvider()

    Dim iv = New Byte() {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
    Dim encryptor = aes.CreateEncryptor(key, iv) ' Noncompliant

    Dim cryptoStream = New CryptoStream(target, encryptor, CryptoStreamMode.Write)
    Dim swEncrypt = New StreamWriter(cryptoStream)

    swEncrypt.Write(dataToEncrypt)
End Sub

Compliant solution

In this example, the code implicitly uses a cryptographically strong random number generator: each new AesCryptoServiceProvider instance generates a random IV, exposed through aes.IV.

Imports System.IO
Imports System.Security.Cryptography

Public Sub Encrypt(key As Byte(), dataToEncrypt As Byte(), target As MemoryStream)
    Dim aes = New AesCryptoServiceProvider()

    Dim encryptor = aes.CreateEncryptor(key, aes.IV)

    Dim cryptoStream = New CryptoStream(target, encryptor, CryptoStreamMode.Write)
    Dim swEncrypt = New StreamWriter(cryptoStream)

    swEncrypt.Write(dataToEncrypt)
End Sub

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.
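
To make this concrete, the following is a minimal sketch (method and variable names are illustrative, not part of the rule) that generates a fresh random IV for every message and prepends it to the ciphertext, since the IV may be transmitted in the clear:

Imports System.IO
Imports System.Security.Cryptography

Public Module IvExample
    Public Function EncryptWithFreshIv(key As Byte(), plaintext As Byte()) As Byte()
        Using aes As Aes = Aes.Create()
            aes.Key = key
            aes.GenerateIV() ' cryptographically strong, unique for each call
            Using ms As New MemoryStream()
                ms.Write(aes.IV, 0, aes.IV.Length) ' ship the IV with the ciphertext
                Using cs As New CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write)
                    cs.Write(plaintext, 0, plaintext.Length)
                End Using
                Return ms.ToArray()
            End Using
        End Using
    End Function
End Module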

Resources

Standards

vbnet:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers.

The .NET Core framework offers multiple features that help during debugging. Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage and Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage are two of them. Make sure that those features are disabled in production.

Use If env.IsDevelopment() to disable debug code.

Sensitive Code Example

This rule raises issues when the following .Net Core methods are called: Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage, Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage.

Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting

Namespace MyMvcApp
    Public Class Startup
        Public Sub Configure(ByVal app As IApplicationBuilder, ByVal env As IHostingEnvironment)
            ' Those calls are Sensitive because it seems that they will run in production
            app.UseDeveloperExceptionPage() 'Sensitive
            app.UseDatabaseErrorPage() 'Sensitive
        End Sub
    End Class
End Namespace

Compliant Solution

Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting

Namespace MyMvcApp
    Public Class Startup
        Public Sub Configure(ByVal app As IApplicationBuilder, ByVal env As IHostingEnvironment)
            If env.IsDevelopment() Then ' Compliant
                ' The following calls are ok because they are disabled in production
                app.UseDeveloperExceptionPage()
                app.UseDatabaseErrorPage()
            End If
        End Sub
    End Class
End Namespace

See

vbnet:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g., a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio of most legitimate archives is 1 to 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if it exceeds a predefined threshold. In particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For Each entry As ZipArchiveEntry In archive.Entries
    ' entry.FullName could contain parent directory references ".." and the destinationPath variable could end up outside of the desired path
    Dim destinationPath As String = Path.GetFullPath(Path.Combine(path, entry.FullName))
    entry.ExtractToFile(destinationPath) ' Sensitive, extracts the entry to a file

    Dim stream As Stream
    stream = entry.Open() ' Sensitive, the entry is about to be extracted
Next

Compliant Solution

Const ThresholdRatio As Double = 10
Const ThresholdSize As Integer = 1024 * 1024 * 1024 ' 1 GB
Const ThresholdEntries As Integer = 10000
Dim TotalSizeArchive, TotalEntryArchive, TotalEntrySize, Cnt As Integer
Dim Buffer(1023) As Byte
Using ZipToOpen As New FileStream("ZipBomb.zip", FileMode.Open), Archive As New ZipArchive(ZipToOpen, ZipArchiveMode.Read)
    For Each Entry As ZipArchiveEntry In Archive.Entries
        Using s As Stream = Entry.Open
            TotalEntryArchive += 1
            TotalEntrySize = 0
            Do
                Cnt = s.Read(Buffer, 0, Buffer.Length)
                TotalEntrySize += Cnt
                TotalSizeArchive += Cnt
                If TotalEntrySize / Entry.CompressedLength > ThresholdRatio Then Exit Do    ' The ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb attack
            Loop While Cnt > 0
        End Using
        If TotalSizeArchive > ThresholdSize Then Exit For       ' The uncompressed data size is too much for the application resource capacity
        If TotalEntryArchive > ThresholdEntries Then Exit For   ' Too many entries in this archive can lead to inode exhaustion on the system
    Next
End Using

See

vbnet:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim Algorithm As New AesManaged() With {
            .KeySize = 128,
            .BlockSize = 128,
            .Mode = CipherMode.ECB, ' Noncompliant
            .Padding = PaddingMode.PKCS7
            }
    End Sub
End Module

Example with an asymmetric cipher, RSA:

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim data(10) As Byte
        Dim RsaCsp = New RSACryptoServiceProvider()
        RsaCsp.Encrypt(data, False) ' Noncompliant
    End Sub
End Module

Compliant solution

For the AES symmetric cipher, use the GCM mode:

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim data(10) As Byte
        Dim Algorithm As New AesGcm(data)
    End Sub
End Module

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

Imports System.Security.Cryptography

Public Module Example

    Public Sub Encrypt()
        Dim data(10) As Byte
        Dim RsaCsp = New RSACryptoServiceProvider()
        RsaCsp.Encrypt(data, True)
    End Sub
End Module

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-then-Authenticate-then-Translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
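
To illustrate full AES-GCM usage beyond the constructor call shown above, here is a minimal sketch (module and variable names are illustrative) that encrypts with a unique nonce and an authentication tag:

Imports System.Security.Cryptography

Public Module GcmExample
    Public Sub EncryptWithGcm(key As Byte(), plaintext As Byte())
        Dim nonce(11) As Byte                          ' 12-byte nonce, must be unique per encryption
        RandomNumberGenerator.Fill(nonce)
        Dim ciphertext(plaintext.Length - 1) As Byte   ' same length as the plaintext
        Dim tag(15) As Byte                            ' 16-byte authentication tag
        Using gcm As New AesGcm(key)
            gcm.Encrypt(nonce, plaintext, ciphertext, tag)
        End Using
    End Sub
End Module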

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

vbnet:S5547

This vulnerability makes it possible to recover the cleartext of the encrypted message without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

Imports System.Security.Cryptography

Public Sub Encrypt()
    Dim SimpleDES As New DESCryptoServiceProvider() ' Noncompliant
End Sub

Compliant solution

Imports System.Security.Cryptography

Public Sub Encrypt()
    Dim AES128ECB = Aes.Create()
End Sub

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

vbnet:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Jwt.Net

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

Imports JWT

Public Sub Decode(decoder As IJwtDecoder)
    Dim decoded As String = decoder.Decode(token, secret, verify:=False) ' Noncompliant
End Sub

Imports JWT

Public Sub Decode()
    Dim decoded As String = New JwtBuilder().
        WithSecret(secret).
        Decode(token) ' Noncompliant
End Sub

Compliant solution

Imports JWT

Public Sub Decode(decoder As IJwtDecoder)
    Dim decoded As String = decoder.Decode(token, secret, verify:=True)
End Sub

When using JwtBuilder, make sure to call MustVerifySignature().

Imports JWT

Public Sub Decode()
    Dim decoded As String = New JwtBuilder().
        WithSecret(secret).
        MustVerifySignature().
        Decode(token)
End Sub

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take when encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

vbnet:S5773

Deserialization is the process of converting serialized data (such as objects or data structures) back into their original form. The types allowed to be deserialized should be strictly controlled.

Why is this an issue?

During the deserialization process, the state of an object will be reconstructed from the serialized data stream. By allowing unrestricted deserialization of types, the application makes it possible for attackers to use types with dangerous or otherwise sensitive behavior during the deserialization process.

What is the potential impact?

When an application deserializes untrusted data without proper restrictions, an attacker can craft malicious serialized objects. Depending on the affected objects and properties, the consequences can vary.

Remote Code Execution

If attackers can craft malicious serialized objects that contain executable code, this code will run within the application’s context, potentially gaining full control over the system. This can lead to unauthorized access, data breaches, or even complete system compromise.

For example, a well-known attack vector consists of serializing an object of type TempFileCollection with arbitrary attacker-defined files, which are deleted when the application deserializes this object (when the Finalize() method of the TempFileCollection object is called). These kinds of specially crafted serialized objects are called "gadgets".

Privilege escalation

Unrestricted deserialization can also enable attackers to escalate their privileges within the application. By manipulating the serialized data, an attacker can modify object properties or bypass security checks, granting them elevated privileges that they should not have. This can result in unauthorized access to sensitive data, unauthorized actions, or even administrative control over the application.

Denial of Service

In some cases, an attacker can abuse the deserialization process to cause a denial of service (DoS) condition. By providing specially crafted serialized data, the attacker can trigger excessive resource consumption, leading to system instability or unresponsiveness. This can disrupt the availability of the application, impacting its functionality and causing inconvenience to users.

How to fix it

Code examples

Noncompliant code example

With BinaryFormatter, NetDataContractSerializer or SoapFormatter:

Dim myBinaryFormatter = New BinaryFormatter()
myBinaryFormatter.Deserialize(stream) ' Noncompliant

With JavaScriptSerializer:

Dim serializer1 As JavaScriptSerializer = New JavaScriptSerializer(New SimpleTypeResolver()) ' Noncompliant: SimpleTypeResolver is insecure (every type is resolved)
serializer1.Deserialize(Of ExpectedType)(json)

Compliant solution

With BinaryFormatter, NetDataContractSerializer or SoapFormatter:

Imports System.Reflection
Imports System.Runtime.Serialization
Imports System.Runtime.Serialization.Formatters.Binary

NotInheritable Class CustomBinder
    Inherits SerializationBinder
    Public Overrides Function BindToType(assemblyName As String, typeName As String) As Type
        If Not (Equals(typeName, "type1") OrElse Equals(typeName, "type2") OrElse Equals(typeName, "type3")) Then
            Throw New SerializationException("Only type1, type2 and type3 are allowed")
        End If
        Return Assembly.Load(assemblyName).[GetType](typeName)
    End Function
End Class

Dim myBinaryFormatter = New BinaryFormatter()
myBinaryFormatter.Binder = New CustomBinder()
myBinaryFormatter.Deserialize(stream)

With JavaScriptSerializer:

Public Class CustomSafeTypeResolver
    Inherits JavaScriptTypeResolver
    Public Overrides Function ResolveType(id As String) As Type
        If Not Equals(id, "ExpectedType") Then
            Throw New ArgumentException("Only ExpectedType is allowed during deserialization")
        End If
        Return Type.[GetType](id)
    End Function
End Class

Dim serializer As JavaScriptSerializer = New JavaScriptSerializer(New CustomSafeTypeResolver())
serializer.Deserialize(Of ExpectedType)(json)

Going the extra mile

Instead of using BinaryFormatter and similar serializers, it is recommended to use safer alternatives in most cases, such as XmlSerializer or DataContractSerializer (a minimal sketch follows the list below).

If it’s not possible then try to mitigate the risk by restricting the types allowed to be deserialized:

  • by implementing an "allow-list" of types; keep in mind, however, that novel dangerous types are regularly discovered, so this protection could become insufficient over time.
  • and/or by implementing a tamper protection, such as message authentication codes (MAC). This way, only objects serialized with the correct MAC hash will be deserialized.
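
A minimal sketch of the DataContractSerializer alternative mentioned above (ExpectedType stands for the concrete type the application expects to receive):

Imports System.IO
Imports System.Runtime.Serialization

Public Module SafeSerialization
    Public Function DeserializeExpected(stream As Stream) As ExpectedType
        ' Only the explicitly declared ExpectedType (and its known types) can be produced.
        Dim serializer As New DataContractSerializer(GetType(ExpectedType))
        Return CType(serializer.ReadObject(stream), ExpectedType)
    End Function
End Module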

Resources

Documentation

Articles & blog posts

Standards

vbnet:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

These samples select TLS v1.0, which is a weak cryptographic protocol:

Imports System.Net
Imports System.Security.Authentication

Public Sub Encrypt()
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls ' Noncompliant
End Sub

Imports System.Net.Http
Imports System.Security.Authentication

Public Sub Encrypt()
    Dim Handler As New HttpClientHandler With {
        .SslProtocols = SslProtocols.Tls ' Noncompliant
    }
End Sub

Compliant solution

Imports System.Net
Imports System.Security.Authentication

Public Sub Encrypt()
    ServicePointManager.SecurityProtocol = _
        SecurityProtocolType.Tls12 _
        Or SecurityProtocolType.Tls13
End Sub

Imports System.Net.Http
Imports System.Security.Authentication

Public Sub Encrypt()
    Dim Handler As New HttpClientHandler With {
        .SslProtocols = SslProtocols.Tls12
    }
End Sub

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback concerns outdated frameworks: their TLS v1.2 settings may still enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

vbnet:S5753

ASP.NET 1.1+ comes with a feature called Request Validation, which prevents the server from accepting content containing un-encoded HTML. This feature is a first protection layer against Cross-Site Scripting (XSS) attacks and acts as a simple Web Application Firewall (WAF), rejecting requests that potentially contain malicious content.

While this feature is not a silver bullet against all XSS attacks, it helps to catch basic ones. It will, for example, prevent <script type="text/javascript" src="https://malicious.domain/payload.js"> from reaching your Controller.

Note: as the Request Validation feature is only available for ASP.NET, no Security Hotspot is raised on ASP.NET Core applications.

Ask Yourself Whether

  • the developer doesn’t know the impact of deactivating the Request Validation feature
  • the web application accepts user-supplied data
  • not all user-supplied data is validated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Activate the Request Validation feature for all HTTP requests

Sensitive Code Example

At Controller level:

<ValidateInput(False)>
Public Function Welcome(Name As String) As ActionResult
  ...
End Function

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="false" />
      ...
      <httpRuntime requestValidationMode="0.0" />
   </system.web>
</configuration>

Compliant Solution

At Controller level:

<ValidateInput(True)>
Public Function Welcome(Name As String) As ActionResult
  ...
End Function

or

Public Function Welcome(Name As String) As ActionResult
  ...
End Function

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="true" />
      ...
      <httpRuntime requestValidationMode="4.5" />
   </system.web>
</configuration>

See

vbnet:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms such as AES, RSA, or SHA should be used instead.

This rule tracks custom implementation of these types from System.Security.Cryptography namespace:

  • AsymmetricAlgorithm
  • AsymmetricKeyExchangeDeformatter
  • AsymmetricKeyExchangeFormatter
  • AsymmetricSignatureDeformatter
  • AsymmetricSignatureFormatter
  • DeriveBytes
  • HashAlgorithm
  • ICryptoTransform
  • SymmetricAlgorithm

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

Public Class CustomHash     ' Noncompliant
    Inherits HashAlgorithm

    Private fResult() As Byte

    Public Overrides Sub Initialize()
        fResult = Nothing
    End Sub

    Protected Overrides Function HashFinal() As Byte()
        Return fResult
    End Function

    Protected Overrides Sub HashCore(array() As Byte, ibStart As Integer, cbSize As Integer)
        fResult = If(fResult, array.Take(8).ToArray)
    End Sub

End Class

Compliant Solution

Dim mySHA256 As SHA256 = SHA256.Create()
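
A short usage sketch of the standard algorithm above (the input string is illustrative):

Imports System.Security.Cryptography
Imports System.Text

Public Sub ComputeStandardHash()
    ' Hash data with the framework-provided SHA-256 implementation
    ' instead of a custom HashAlgorithm subclass.
    Using Sha As SHA256 = SHA256.Create()
        Dim Digest As Byte() = Sha.ComputeHash(Encoding.UTF8.GetBytes("some data"))
        Console.WriteLine(Convert.ToBase64String(Digest))
    End Using
End Sub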

See

vbnet:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository (see the sketch after this list).
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.
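
As a minimal sketch of the first two practices (the key and variable names are illustrative, not part of the rule):

Imports System.Configuration

Public Function GetDatabasePassword() As String
    ' Read the secret from an external configuration source instead of the code.
    ' "Db.Password" and "DB_PASSWORD" are illustrative names.
    Dim Secret As String = ConfigurationManager.AppSettings("Db.Password")
    If String.IsNullOrEmpty(Secret) Then
        ' Fall back to an environment variable, e.g. populated by a secrets manager.
        Secret = Environment.GetEnvironmentVariable("DB_PASSWORD")
    End If
    Return Secret
End Function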

Sensitive Code Example

Dim username As String = "admin"
Dim password As String = "Password123" ' Sensitive
Dim usernamePassword As String = "user=admin&password=Password123" ' Sensitive
Dim url As String = "scheme://user:Admin123@domain.com" ' Sensitive

Compliant Solution

Dim username As String = "admin"
Dim password As String = GetEncryptedPassword()
Dim usernamePassword As String = String.Format("user={0}&password={1}", GetEncryptedUsername(), GetEncryptedPassword())
Dim url As String = $"scheme://{username}:{password}@domain.com"

Dim url2 As String= "http://guest:guest@domain.com" ' Compliant
Const Password_Property As String = "custom.password" ' Compliant

Exceptions

  • Issue is not raised when URI username and password are the same.
  • Issue is not raised when searched pattern is found in variable name and value.

See

vbnet:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, resetting a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that are deliberately slow to compute, like bcrypt, scrypt, argon2 or pbkdf2, because they slow down brute force attacks.
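
For the password-hashing case, a minimal PBKDF2 sketch using the framework’s Rfc2898DeriveBytes; the iteration count and sizes are illustrative:

Imports System.Security.Cryptography

Public Function HashPassword(Password As String) As Byte()
    ' PBKDF2 with a random 16-byte salt and a deliberately high iteration count.
    ' In real code, persist Derive.Salt alongside the hash for verification.
    Using Derive As New Rfc2898DeriveBytes(Password, 16, 100000, HashAlgorithmName.SHA256)
        Return Derive.GetBytes(32) ' 256-bit derived key
    End Using
End Function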

Sensitive Code Example

Imports System.Security.Cryptography

Sub ComputeHash()

    ' Review all instantiations of classes that inherit from HashAlgorithm, for example:
    Dim hashAlgo As HashAlgorithm = HashAlgorithm.Create() ' Sensitive
    Dim hashAlgo2 As HashAlgorithm = HashAlgorithm.Create("SHA1") ' Sensitive
    Dim sha As SHA1 = New SHA1CryptoServiceProvider() ' Sensitive
    Dim md5 As MD5 = New MD5CryptoServiceProvider() ' Sensitive

    ' ...
End Sub

Class MyHashAlgorithm
    Inherits HashAlgorithm ' Sensitive

    ' ...
End Class

Compliant Solution

Imports System.Security.Cryptography

Sub ComputeHash()
    Dim sha256 = New SHA256CryptoServiceProvider() ' Compliant
    Dim sha384 = New SHA384CryptoServiceProvider() ' Compliant
    Dim sha512 = New SHA512CryptoServiceProvider() ' Compliant

    ' ...
End Sub

See

vbnet:S4792

This rule is deprecated, and will eventually be removed.

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and on how it is logged.

This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into logs every time a user performs an action and the user can perform the action as many times as he/she wants.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The loggers mode (info, warn, error) might filter out important information. They might not print contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format that is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers but also any personal information such as user names, locations, etc. Usually any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them in the logs. This includes checking their size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts. It could for example use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

.Net Core: configure programmatically

Imports System
Imports System.Collections
Imports System.Collections.Generic
Imports Microsoft.AspNetCore
Imports Microsoft.AspNetCore.Builder
Imports Microsoft.AspNetCore.Hosting
Imports Microsoft.Extensions.Configuration
Imports Microsoft.Extensions.DependencyInjection
Imports Microsoft.Extensions.Logging
Imports Microsoft.Extensions.Options

Namespace MvcApp

    Public Class ProgramLogging

        Public Shared Function CreateWebHostBuilder(args As String()) As IWebHostBuilder

            WebHost.CreateDefaultBuilder(args) _
                .ConfigureLogging(Sub(hostingContext, Logging) ' Sensitive
                                      ' ...
                                  End Sub) _
            .UseStartup(Of StartupLogging)()

            '...
        End Function
    End Class


    Public Class StartupLogging

        Public Sub ConfigureServices(services As IServiceCollection)

            services.AddLogging(Sub(logging) ' Sensitive
                                    '...
                                End Sub)
        End Sub

        Public Sub Configure(app As IApplicationBuilder, env As IHostingEnvironment, loggerFactory As ILoggerFactory)

            Dim config As IConfiguration = Nothing
            Dim level As LogLevel = LogLevel.Critical
            Dim includeScopes As Boolean = False
            Dim filter As Func(Of String, Microsoft.Extensions.Logging.LogLevel, Boolean) = Nothing
            Dim consoleSettings As Microsoft.Extensions.Logging.Console.IConsoleLoggerSettings = Nothing
            Dim azureSettings As Microsoft.Extensions.Logging.AzureAppServices.AzureAppServicesDiagnosticsSettings = Nothing
            Dim eventLogSettings As Microsoft.Extensions.Logging.EventLog.EventLogSettings = Nothing

            ' An issue will be raised for each call to an ILoggerFactory extension methods adding loggers.
            loggerFactory.AddAzureWebAppDiagnostics() ' Sensitive
            loggerFactory.AddAzureWebAppDiagnostics(azureSettings) ' Sensitive
            loggerFactory.AddConsole() ' Sensitive
            loggerFactory.AddConsole(level) ' Sensitive
            loggerFactory.AddConsole(level, includeScopes) ' Sensitive
            loggerFactory.AddConsole(filter) ' Sensitive
            loggerFactory.AddConsole(filter, includeScopes) ' Sensitive
            loggerFactory.AddConsole(config) ' Sensitive
            loggerFactory.AddConsole(consoleSettings) ' Sensitive
            loggerFactory.AddDebug() ' Sensitive
            loggerFactory.AddDebug(level) ' Sensitive
            loggerFactory.AddDebug(filter) ' Sensitive
            loggerFactory.AddEventLog() ' Sensitive
            loggerFactory.AddEventLog(eventLogSettings) ' Sensitive
            loggerFactory.AddEventLog(level) ' Sensitive
            ' Only available for .NET Standard 2.0 and above
            'loggerFactory.AddEventSourceLogger() ' Sensitive

            Dim providers As IEnumerable(Of ILoggerProvider) = Nothing
            Dim filterOptions1 As LoggerFilterOptions = Nothing
            Dim filterOptions2 As IOptionsMonitor(Of LoggerFilterOptions) = Nothing

            Dim factory As LoggerFactory = New LoggerFactory() ' Sensitive
            factory = New LoggerFactory(providers) ' Sensitive
            factory = New LoggerFactory(providers, filterOptions1) ' Sensitive
            factory = New LoggerFactory(providers, filterOptions2) ' Sensitive
        End Sub
    End Class
End Namespace

Log4Net

Imports System
Imports System.IO
Imports System.Xml
Imports log4net.Appender
Imports log4net.Config
Imports log4net.Repository

Namespace Logging
    Class Log4netLogging
        Private Sub Foo(ByVal repository As ILoggerRepository, ByVal element As XmlElement, ByVal configFile As FileInfo, ByVal configUri As Uri, ByVal configStream As Stream, ByVal appender As IAppender, ParamArray appenders As IAppender())
            log4net.Config.XmlConfigurator.Configure(repository) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, element) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configFile) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configUri) ' Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configStream) ' Sensitive
            log4net.Config.XmlConfigurator.ConfigureAndWatch(repository, configFile) ' Sensitive

            log4net.Config.DOMConfigurator.Configure() ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(element) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, element) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(configFile) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configFile) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(configStream) ' Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configStream) ' Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(configFile) ' Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(repository, configFile) ' Sensitive

            log4net.Config.BasicConfigurator.Configure() ' Sensitive
            log4net.Config.BasicConfigurator.Configure(appender) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(appenders) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appender) ' Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appenders) ' Sensitive
        End Sub
    End Class
End Namespace

NLog: configure programmatically

Namespace Logging
    Class NLogLogging
        Private Sub Foo(ByVal config As NLog.Config.LoggingConfiguration)
            NLog.LogManager.Configuration = config ' Sensitive
        End Sub
    End Class
End Namespace

Serilog

Namespace Logging
    Class SerilogLogging
        Private Sub Foo()
            Dim config As Serilog.LoggerConfiguration = New Serilog.LoggerConfiguration() ' Sensitive
        End Sub
    End Class
End Namespace

See

vbnet:S2077

Formatted SQL queries can be difficult to maintain and debug, and they can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); the goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use parameterized queries, prepared statements or stored procedures, and bind untrusted values to query parameters instead of concatenating them into the query string.

Sensitive Code Example

Public Sub SqlCommands(ByVal connection As SqlConnection, ByVal query As String, ByVal param As String)
    Dim sensitiveQuery As String = String.Concat(query, param)
    Dim command As New SqlCommand(sensitiveQuery) ' Sensitive

    command.CommandText = sensitiveQuery ' Sensitive

    Dim adapter As SqlDataAdapter
    adapter = New SqlDataAdapter(sensitiveQuery, connection) ' Sensitive
End Sub

Public Sub Foo(ByVal context As DbContext, ByVal query As String, ByVal param As String)
    Dim sensitiveQuery As String = String.Concat(query, param)
    context.Database.ExecuteSqlCommand(sensitiveQuery) ' Sensitive

    context.Query(Of User)().FromSql(sensitiveQuery) ' Sensitive
End Sub

Compliant Solution

Public Sub Foo(ByVal context As DbContext, ByVal value As String)
    context.Database.ExecuteSqlCommand("SELECT * FROM mytable WHERE mycol=@p0", value) ' Compliant, the query is parameterized
End Sub
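
For the ADO.NET calls flagged in the sensitive example, a minimal parameterized sketch (the table and column names are illustrative):

Public Sub SafeSqlCommand(ByVal connection As SqlConnection, ByVal value As String)
    ' Bind the untrusted value as a parameter instead of concatenating it.
    Dim command As New SqlCommand("SELECT * FROM mytable WHERE mycol = @p0", connection)
    command.Parameters.AddWithValue("@p0", value)
End Sub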

See

vbnet:S5693

Rejecting requests with a significant content length is a good practice to control network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • lower than or equal to 8 MB for file uploads.
    • lower than or equal to 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

Imports Microsoft.AspNetCore.Mvc

Public Class MyController
    Inherits Controller

    <HttpPost>
    <DisableRequestSizeLimit> ' Sensitive: No size limit
    <RequestSizeLimit(10485760)> ' Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB
    Public Function PostRequest(model As Model) As IActionResult
    ' ...
    End Function

    <HttpPost>
    <RequestFormLimits(MultipartBodyLengthLimit = 10485760)> ' Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB
    Public Function MultipartFormRequest(model As Model) As IActionResult
    ' ...
    End Function

End Class

Compliant Solution

Imports Microsoft.AspNetCore.Mvc

Public Class MyController
    Inherits Controller

    <HttpPost>
    <RequestSizeLimit(8388608)> ' Compliant: 8388608 B = 8192 KB = 8 MB
    Public Function PostRequest(model As Model) As IActionResult
    ' ...
    End Function

    <HttpPost>
    <RequestFormLimits(MultipartBodyLengthLimit = 8388608)> ' Compliant: 8388608 B = 8192 KB = 8 MB
    Public Function MultipartFormRequest(model As Model) As IActionResult
    ' ...
    End Function

End Class

See

vbnet:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the list below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP, TMPDIR and TEMP.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP
  • %USERPROFILE%\AppData\Local\Temp

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Out of the box, .NET is missing secure-by-design APIs to create temporary files. To overcome this, one of the following options can be used:

  • Use a dedicated sub-folder with tightly controlled permissions
  • Create temporary files in a publicly writable folder and make sure:
    • Generated filename is unpredictable
    • File is readable and writable only by the creating user ID
    • File descriptor is not inherited by child processes
    • File is destroyed as soon as it is closed

Sensitive Code Example

Using Writer As New StreamWriter("/tmp/f") ' Sensitive
' ...
End Using
Dim Tmp As String = Environment.GetEnvironmentVariable("TMP") ' Sensitive

Compliant Solution

Dim RandomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())

' Creates a new file with write, non-inheritable permissions which is deleted on close.
Using FileStream As New FileStream(RandomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)
    Using Writer As New StreamWriter(FileStream)
    ' ...
    End Using
End Using

See

vbnet:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundaries and use their content without additional security validation or sanitation. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

Imports System.IO

Sub Example()
    Dim TempPath = Path.GetTempFileName() 'Noncompliant

    Using Writer As New StreamWriter(TempPath)
        Writer.WriteLine("content")
    End Using
End Sub

Compliant solution

Imports System.IO

Sub Example()
    Dim RandomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName())

    Using FileStream As New FileStream(RandomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose)
        Using Writer As New StreamWriter(FileStream)
            Writer.WriteLine("content")
        End Using
    End Using
End Sub

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Strong security controls

Temporary files can be created using unsafe functions and APIs as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.

Here the example compliant code uses the Path.GetTempPath and Path.GetRandomFileName functions to generate a unique random file name. The file is then opened with the FileMode.CreateNew option, which ensures the creation fails if the file already exists. The FileShare.None option additionally prevents the file from being opened again by any process. Finally, this code ensures the file gets destroyed once the application has finished using it, via the FileOptions.DeleteOnClose option.

Resources

Documentation

  • OWASP - Insecure Temporary File

Standards

vbnet:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password-and-salt pairs might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:

  • Argon2
  • PBKDF2
  • Scrypt
  • Bcrypt

When they are used for password storage, using a secure, random salt is required.

However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.

How to fix it in .NET

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

Imports System.Security.Cryptography
Imports System.Text

Public Sub Hash(Password As String)
    Dim Salt As Byte() = Encoding.UTF8.GetBytes("salty")
    Dim Hashed As New Rfc2898DeriveBytes(Password, Salt) ' Noncompliant
End Sub

Compliant solution

Imports System.Security.Cryptography

Public Sub Hash(Password As String)
    Dim Hashed As New Rfc2898DeriveBytes(Password, 64)
End Sub

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 32 bytes (256 bits), as recommended by industry standards.

In the case of the code sample, the class automatically takes care of generating a secure salt if none is specified.
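
If a salt must be produced explicitly (for example, to store it alongside the hash), it should come from a cryptographically secure generator. A minimal sketch, with an illustrative 32-byte size:

Imports System.Security.Cryptography

Public Function GenerateSalt() As Byte()
    ' 32 random bytes from a cryptographically secure generator.
    Dim Salt(31) As Byte
    Using Rng As RandomNumberGenerator = RandomNumberGenerator.Create()
        Rng.GetBytes(Salt)
    End Using
    Return Salt
End Function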

Resources

Standards

vbnet:S2612

In Unix, "others" class refers to all users except the owner of the file and the members of the group assigned to this file.

In Windows, "Everyone" group is similar and includes all members of the Authenticated Users group as well as the built-in Guest account, and several other built-in security accounts.

Granting permissions to these groups can lead to unintended access to files.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

.Net Framework:

Dim unsafeAccessRule = New FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow)

Dim fileSecurity = File.GetAccessControl("path")
fileSecurity.AddAccessRule(unsafeAccessRule) ' Sensitive
fileSecurity.SetAccessRule(unsafeAccessRule) ' Sensitive
File.SetAccessControl("fileName", fileSecurity)

.Net / .Net Core

Dim fileInfo = New FileInfo("path")
Dim fileSecurity = fileInfo.GetAccessControl()

fileSecurity.AddAccessRule(New FileSystemAccessRule("Everyone", FileSystemRights.Write, AccessControlType.Allow)) ' Sensitive
fileInfo.SetAccessControl(fileSecurity)

.Net / .Net Core using Mono.Posix.NETStandard

Dim fileSystemEntry = UnixFileSystemInfo.GetFileSystemEntry("path")
fileSystemEntry.FileAccessPermissions = FileAccessPermissions.OtherReadWriteExecute ' Sensitive

Compliant Solution

.Net Framework

Dim safeAccessRule = New FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny)

Dim fileSecurity = File.GetAccessControl("path")
fileSecurity.AddAccessRule(safeAccessRule)
File.SetAccessControl("path", fileSecurity)

.Net / .Net Core

Dim safeAccessRule = New FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny)

Dim fileInfo = New FileInfo("path")
Dim fileSecurity = fileInfo.GetAccessControl()
fileSecurity.SetAccessRule(safeAccessRule)
fileInfo.SetAccessControl(fileSecurity)

.Net / .Net Core using Mono.Posix.NETStandard

Dim fs = UnixFileSystemInfo.GetFileSystemEntry("path")
fs.FileAccessPermissions = FileAccessPermissions.UserExecute

See

vbnet:S3884

This rule is deprecated, and will eventually be removed.

Why is this an issue?

CoSetProxyBlanket and CoInitializeSecurity both work to set the permissions context in which the process invoked immediately after is executed. Calling them from within that process is useless because it’s too late at that point; the permissions context has already been set.

Specifically, these methods are meant to be called from non-managed code such as a C++ wrapper that then invokes the managed, i.e. C# or VB.NET, code.

Noncompliant code example

Public Class Noncompliant

    <DllImport("ole32.dll")>
    Public Shared Function CoSetProxyBlanket(<MarshalAs(UnmanagedType.IUnknown)> pProxy As Object, dwAuthnSvc As UInt32, dwAuthzSvc As UInt32, <MarshalAs(UnmanagedType.LPWStr)> pServerPrincName As String, dwAuthnLevel As UInt32, dwImpLevel As UInt32, pAuthInfo As IntPtr, dwCapabilities As UInt32) As Integer
    End Function

    Public Enum RpcAuthnLevel
        [Default] = 0
        None = 1
        Connect = 2
        [Call] = 3
        Pkt = 4
        PktIntegrity = 5
        PktPrivacy = 6
    End Enum

    Public Enum RpcImpLevel
        [Default] = 0
        Anonymous = 1
        Identify = 2
        Impersonate = 3
        [Delegate] = 4
    End Enum

    Public Enum EoAuthnCap
        None = &H00
        MutualAuth = &H01
        StaticCloaking = &H20
        DynamicCloaking = &H40
        AnyAuthority = &H80
        MakeFullSIC = &H100
        [Default] = &H800
        SecureRefs = &H02
        AccessControl = &H04
        AppID = &H08
        Dynamic = &H10
        RequireFullSIC = &H200
        AutoImpersonate = &H400
        NoCustomMarshal = &H2000
        DisableAAA = &H1000
    End Enum

    <DllImport("ole32.dll")>
    Public Shared Function CoInitializeSecurity(pVoid As IntPtr, cAuthSvc As Integer, asAuthSvc As IntPtr, pReserved1 As IntPtr, level As RpcAuthnLevel, impers As RpcImpLevel, pAuthList As IntPtr, dwCapabilities As EoAuthnCap, pReserved3 As IntPtr) As Integer
    End Function

    Public Sub DoSomething()
        Dim Hres1 As Integer = CoSetProxyBlanket(Nothing, 0, 0, Nothing, 0, 0, IntPtr.Zero, 0) ' Noncompliant
        Dim Hres2 As Integer = CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero, RpcAuthnLevel.None, RpcImpLevel.Impersonate, IntPtr.Zero, EoAuthnCap.None, IntPtr.Zero) ' Noncompliant
    End Sub

End Class

Resources

vbnet:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It invites the mistake of using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue will take more time, which will increase an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without having to rebuild the software.

Sensitive Code Example

Dim ip = "192.168.12.42" ' Sensitive
Dim address = IPAddress.Parse(ip)

Compliant Solution

Dim ip = ConfigurationManager.AppSettings("myapplication.ip") ' Compliant
Dim address = IPAddress.Parse(ip)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

vbnet:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in .NET

Code examples

In the following example, the callback change impacts all HTTP requests made by the application.

The certificate validation gets disabled by overriding ServerCertificateValidationCallback with an implementation that always returns True. It is highly recommended to keep the original validation behavior.

Noncompliant code example

Imports System.Net

Public Sub Send()
    ServicePointManager.ServerCertificateValidationCallback =
        Function(sender, certificate, chain, errors) True ' Noncompliant

    Dim request As System.Net.HttpWebRequest = System.Net.HttpWebRequest.Create(New System.Uri("https://example.com"))
    request.Method = System.Net.WebRequestMethods.Http.Get
    Dim response As System.Net.HttpWebResponse = request.GetResponse()
    response.Close()
End Sub
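
A minimal sketch of the fix: leave the callback untouched, or reset it so the framework performs its default certificate validation (setting it to Nothing restores the built-in behavior):

Imports System.Net

Public Sub Send()
    ' Resetting the callback to Nothing restores the framework's
    ' default certificate validation.
    ServicePointManager.ServerCertificateValidationCallback = Nothing
End Sub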

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Resources

Standards

vbnet:S6444

Not specifying a timeout for regular expressions can lead to a Denial-of-Service attack. Pass a timeout when using System.Text.RegularExpressions to process untrusted input, because a malicious user might craft a value for which the evaluation takes excessively long.

Ask Yourself Whether

  • the input passed to the regular expression is untrusted.
  • the regular expression contains patterns vulnerable to catastrophic backtracking.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to specify a matchTimeout when executing a regular expression.
  • Make sure regular expressions are not vulnerable to Denial-of-Service attacks by reviewing the patterns.
  • Consider using a non-backtracking algorithm by specifying RegexOptions.NonBacktracking.

Sensitive Code Example

Public Sub RegexPattern(Input As String)
    Dim EmailPattern As New Regex(".+@.+", RegexOptions.None)
    Dim IsNumber As Boolean = Regex.IsMatch(Input, "[0-9]+")
    Dim IsLetterA As Boolean = Regex.IsMatch(Input, "(a+)+")
End Sub

Compliant Solution

Public Sub RegexPattern(Input As String)
    Dim EmailPattern As New Regex(".+@.+", RegexOptions.None, TimeSpan.FromMilliseconds(100))
    Dim IsNumber As Boolean = Regex.IsMatch(Input, "[0-9]+", RegexOptions.None, TimeSpan.FromMilliseconds(100))
    Dim IsLetterA As Boolean = Regex.IsMatch(Input, "(a+)+", RegexOptions.NonBacktracking) ' .NET 7 and above
    AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(100)) ' Process-wide setting
End Sub

See

vbnet:S4036

When executing an OS command, unless you specify the full path to the executable, the locations in your application’s PATH environment variable will be searched for the executable. That search could leave an opening for an attacker if one of the elements in PATH is a directory under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

Dim p As New Process()
p.StartInfo.FileName = "binary" ' Sensitive

Compliant Solution

Dim p As New Process()
p.StartInfo.FileName = "C:\Apps\binary.exe" ' Compliant

See

text:S6389

Using bidirectional (BIDI) characters can lead to incomprehensible code.

The Unicode encoding contains BIDI control characters that are used to display text right-to-left (RTL) instead of left-to-right (LTR). This is necessary for certain languages that use RTL text. The BIDI characters can be used to create a difference between what a human sees in the code and what a compiler or interpreter sees. An adversary might use this feature to hide a backdoor in the code that will not be spotted by a human reviewer, as it is not visible.

This can lead to supply chain attacks since the backdoored code might persist over a long time without being detected and can even be included in other projects, for example in the case of libraries.

Ask Yourself Whether

  • This text requires a right-to-left writing system (to use Arabic or Hebrew words, for example).
  • The author of this text is a legitimate user.
  • This text contains a standard instruction, comment or sequence of characters.

There is a risk if you answered no to any of these questions.

Recommended Secure Coding Practices

Open the file in an editor that reveals non-ASCII characters and remove all BIDI control characters that are not intended.

If hidden characters are illegitimate, this issue could indicate a potential ongoing attack on the code. Therefore, it would be best to warn your organization’s security team about this issue.

Required opening BIDI characters should be explicitly closed with the PDI character.

Sensitive Code Example

A hidden BIDI character is present in front of return:

def subtract_funds(account: str, amount: int):
    ''' Subtract funds from bank account then ⁧''' ;return
    bank[account] -= amount
    return

The executed code looks like the following:

def subtract_funds(account: str, amount: int):
    ''' Subtract funds from bank account then <RLI>''' ;return
    bank[account] -= amount
    return

Compliant Solution

No hidden BIDI characters are present:

def subtract_funds(account: str, amount: int):
    ''' Subtract funds from bank account then return; '''
    bank[account] -= amount
    return

See

typescript:S5732

Clickjacking attacks occur when an attacker tries to trick a user into clicking on certain buttons/links of a legitimate website. This attack can take place via malicious HTML frames well hidden in an attacker's website.

For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of his profile by clicking on a button. This is a critical feature with high privacy concerns. Users are generally well informed on the social network of the consequences of this action. An attacker can trick users, without their consent, into performing this action with the below embedded code added on a malicious website:

<html>
<b>Click on the button below to win 5000$</b>
<br>
<iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe>
</html>

By playing with the size of the iframe, it is sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page.

Ask Yourself Whether

  • Critical actions of the application are prone to clickjacking attacks because a simple click on a link or a button can trigger them.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins of frames allowed to be loaded by the browser (this directive deprecates X-Frame-Options).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the frameAncestors directive (or if frameAncestors is set to 'none'):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'none'"] // Sensitive: frameAncestors  is set to none
    }
  })
);

Compliant Solution

In an Express.js application, a standard way to implement the CSP frame-ancestors directive is the helmet-csp or helmet middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'example.com'"] // Compliant
    }
  })
);

See

typescript:S5730

Mixed content occurs when a resource is loaded over the HTTP protocol from a website accessed over the HTTPS protocol. Such content is not encrypted, so it is exposed to MITM attacks and can break the entire level of protection that was desired by implementing encryption with the HTTPS protocol.

The main threat of mixed content is not only to the confidentiality of resources but to the integrity of the whole website:

  • Passive mixed content (e.g. <img src="http://example.com/picture.png">) allows an attacker to access and replace only these resources, like images, with malicious ones, which could lead to successful phishing attacks.
  • With active mixed content (e.g. <script src="http://example.com/library.js">) an attacker can compromise the entire website, for example by injecting malicious JavaScript code (accessing and modifying the DOM, stealing cookies, etc.).

Ask Yourself Whether

  • The HTTPS protocol is in place and external resources are fetched from the website pages.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and blocks the loading of mixed content.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com']
    } // Sensitive: blockAllMixedContent directive is missing
  })
);

Compliant Solution

In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com'],
      blockAllMixedContent: [] // Compliant
    }
  })
);

See

typescript:S5734

MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet, etc.), web browsers look for the Content-Type header defined in the HTTP response received from the server, but often this header is not set or is set with an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content-type, generally by inspecting the content of the resource (the first bytes). This "guess mechanism" is called MIME type sniffing.

Attackers can take advantage of this feature when a website ("example.com" here) allows arbitrary files to be uploaded. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as:

<script>alert(document.cookie)</script>

When the victim visits the website showing the uploaded image, the malicious script embedded into the image will be executed by web browsers performing MIME type sniffing.

Ask Yourself Whether

  • Content-Type header is not systematically set for all resources.
  • Content of resources can be controlled by users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and prevents browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch, the resource is not interpreted. For example, within a <script> object context, JavaScript MIME types are expected (like application/javascript) in the Content-Type header.

Sensitive Code Example

In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet({
    noSniff: false, // Sensitive
  })
);

Compliant Solution

When using helmet in an Express.js application, the noSniff middleware should be enabled (it is also enabled by default):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.noSniff());

See

typescript:S5736

The HTTP header Referer contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks.

Suppose an e-commerce website asks the user for his credit card number to purchase a product:

<html>
<body>
<form action="/valid_order" method="GET">
Type your credit card number to purchase products:
<input type=text id="cc" value="1111-2222-3333-4444">
<input type=submit>
</form>
</body>

When submitting the above HTML form, an HTTP GET request will be performed; the requested URL will be https://example.com/valid_order?cc=1111-2222-3333-4444, with the credit card number inside, and it’s obviously not secure for these reasons:

  • URLs are stored in the history of browsers.
  • URLs could be accidentally shared when doing copy/paste actions.
  • URLs can be stolen if a malicious person looks at the computer screen of a user.

In addition to these threats, when further requests are performed from the "valid_order" page, even with a simple legitimate embedded script like this:

<script src="https://webanalyticservices_example.com/track">

The Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue:

GET /track HTTP/2.0
Host: webanalyticservices_example.com
Referer: https://example.com/valid_order?cc=1111-2222-3333-4444

Ask Yourself Whether

  • Confidential information exists in URLs.
  • The semantics of HTTP methods are not respected (e.g. a GET method is used instead of POST when the state of the application is changed).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Confidential information should not be set inside URLs (GET requests) of the application, and a safe Referrer-Policy header (i.e. different from unsafe-url or no-referrer-when-downgrade) should be used to control how much information is included in the Referer header.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is disabled or used with no-referrer-when-downgrade or unsafe-url:

const express = require('express');
const helmet = require('helmet');

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used
  })
);

Compliant Solution

In an Express.js application, a secure solution is to use the helmet referrer policy middleware set to no-referrer:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer' // Compliant
  })
);

See

typescript:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input; in some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, which means that a small carefully-crafted input (like 20 chars) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact too with, in this case, a large carefully-crafted input (thousands of chars).

This rule determines the runtime complexity of a regular expression and informs you if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.

  • If you have a repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition, if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split(/\s*,/) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ doesn’t cause performance issues: the inner group can be matched only if there exists exactly one b char per repetition of the group.
  • Optimize regular expressions by emulating possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when the regex is not anchored to the beginning of the string, in which case quadratic runtime is quite hard to avoid. In those cases consider the following approaches:

  • Solve the problem without regular expressions
  • Use an alternative non-backtracking regex implementation such as Google’s RE2 or node-re2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it, or using multiple regular expressions. One example would be to replace str.split(/\s*,\s*/) with str.split(",") and then trim the spaces from the strings as a second step (see the sketch after this list).
  • It is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to str.match(regex) could be replaced with matched = str.match(regex) and matched[1] !== undefined.
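
As an illustration of the multiple-passes approach mentioned above, a minimal sketch (the input string and its comma-separated format are assumptions):

const input = " a , b ,c ";
// Split on the literal comma first (linear time), then trim each part as a
// second pass, instead of using the potentially quadratic str.split(/\s*,\s*/).
const parts = input.split(",").map(part => part.trim()); // ["a", "b", "c"]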

Sensitive Code Example

The regex evaluation below effectively never ends:

/(a+)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions and thus can be used, if possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences:

/((?=(a+))\2)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Compliant

See

typescript:S2598

Why is this an issue?

If the file upload feature is implemented without proper folder restriction, it will result in an implicit trust violation within the server, as trusted files will be implicitly stored alongside third-party files that should be considered untrusted.

This can allow an attacker to disrupt the security of an internal server process or the running application.

What is the potential impact?

After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, bash scripts, malware, or malicious configuration files targeting potential processes.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Full application compromise

In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data, install malware, for instance, malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

Server Resource Exhaustion

By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service.

If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it.

Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources.

In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Formidable

Code examples

Noncompliant code example

const Formidable = require('formidable');

const form          = new Formidable(); // Noncompliant
form.uploadDir      = "/tmp/";
form.keepExtensions = true;

Compliant solution

const Formidable = require('formidable');

const form          = new Formidable();
form.uploadDir      = "/uploads/";
form.keepExtensions = false;

How does this work?

Use pre-approved folders

Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:

  • It should have specific read and write permissions that belong to the right people or organizations.
  • It should have a size limit or its size should be monitored.
  • It should contain backup copies if it contains data that belongs to users.

This folder should not be located in /tmp, /var/tmp or in the Windows directory %TEMP%.
These folders are usually "world-writable", can be manipulated, and can be accidentally deleted by the system.

Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names.
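
As an illustration, a minimal sketch of such renaming (the /uploads/ folder and the allow-list decision are assumptions; crypto.randomUUID() requires Node.js 14.17 or later):

const crypto = require('crypto');
const path = require('path');

// The user-supplied file name is ignored entirely; the extension comes from an
// allow-list decision made by the application, not from the uploaded file.
function controlledUploadPath(allowedExtension) {
  return path.join('/uploads/', crypto.randomUUID() + allowedExtension);
}

console.log(controlledUploadPath('.png')); // e.g. /uploads/5eb63bbb-....png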

Resources

typescript:S5739

When implementing the HTTPS protocol, most websites continue to support the HTTP protocol to redirect users to HTTPS when they request an HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS.

Web browsers that see the Strict-Transport-Security policy header for the first time record information specified in the header:

  • the max-age directive, which specifies how long the policy should be kept on the web browser.
  • the includeSubDomains optional directive, which specifies whether the policy should apply to all subdomains or not.
  • the preload optional directive, which is not part of the HSTS specification but is supported by all modern web browsers.

With the preload directive, the web browser never connects to the website over HTTP. To use this directive, the application concerned must be submitted to a preload service maintained by Google.
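
For example, a minimal sketch of opting into preloading (assuming an Express.js application with the helmet middleware, as in the examples below; the domain must additionally be submitted to the preload service):

app.use(helmet.hsts({
  maxAge: 31536000,
  includeSubDomains: true,
  preload: true // only honored once the domain is accepted by the preload service
}));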

Ask Yourself Whether

  • The website is accessible with the unencrypted HTTP protocol.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Strict-Transport-Security policy header; it is recommended to apply this policy to all subdomains (includeSubDomains) and for at least 6 months (max-age=15552000), or even better for 1 year (max-age=31536000).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without the recommended values:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 3153600, // Sensitive, recommended >= 15552000
  includeSubDomains: false // Sensitive, recommended 'true'
}));

Compliant Solution

In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 31536000,
  includeSubDomains: true
})); // Compliant

See

typescript:S5742

Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CA) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into incorrectly verifying a spoofed identity/forged certificate.

CAs implement the Certificate Transparency framework to publicly log the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs and verify that their identity was not usurped.

Ask Yourself Whether

  • The website identity is valuable and well-known, therefore prone to theft.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs to verify whether the website appears inside them; if it does not, the browser will block the request and display a warning to the user.

Sensitive Code Example

In an Express.js application, the code is sensitive if the expect-ct middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
    helmet({
      expectCt: false // Sensitive
    })
);

Compliant Solution

In an Express.js application, the expect-ct middleware is the standard way to implement Expect-CT. Usually, the deployment of this policy starts with report-only mode (enforce: false) and a low maxAge value (the number of seconds the policy will apply). Then, if everything works well, it is recommended to block future connections that violate the Expect-CT policy (enforce: true) and to use a greater value for the maxAge directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.expectCt({
  enforce: true,
  maxAge: 86400
})); // Compliant
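
During the initial roll-out phase described above, a report-only variant can be used first (a sketch, assuming the same helmet middleware):

app.use(helmet.expectCt({
  enforce: false, // report-only: violations are reported but not blocked
  maxAge: 86400
}));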

See

typescript:S5743

This rule is deprecated, and will eventually be removed.

By default, web browsers perform DNS prefetching to reduce the latency due to DNS resolutions required when a user clicks links from a website page.

For instance on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser:

<a href="https://otherexample.com">go on our partner website</a>

It can add significant latency during requests, especially if the page contains many links to cross-origin domains. DNS prefetching allows web browsers to perform DNS resolution in the background before the user clicks a link. This feature can cause privacy issues because DNS resolution from the user’s computer is performed without their consent if they don’t intend to go to the linked website.

On a complex private webpage, a combination of unique links/DNS resolutions can indicate, to an eavesdropper for instance, that the user is visiting the private page.

Ask Yourself Whether

  • Links to cross-origin domains could result in leakage of confidential information about the user’s navigation/behavior on the website.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the X-DNS-Prefetch-Control header with an off value, keeping in mind that this could significantly degrade website performance.

Sensitive Code Example

In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: true // Sensitive: allowing DNS prefetching is security-sensitive
  })
);

Compliant Solution

In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement the X-DNS-Prefetch-Control header:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: false // Compliant
  })
);

See

typescript:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they didn’t intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request; as web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by HTTP POST or HTTP DELETE requests, for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Of course, sensitive operations should never be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

Express.js CSURF middleware protection is not found on an unsafe HTTP method like the POST method:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });
let parseForm = express.urlencoded({ extended: false });

let app = express();

// Sensitive: this operation does not appear to be protected by the CSURF middleware (csrfProtection is not used)
app.post('/money_transfer', parseForm, function (req, res) {
  res.send('Money transferred');
});

Protection provided by the Express.js CSURF middleware is globally disabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive as POST is unsafe method

Compliant Solution

Express.js CSURF middleware protection is used on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });
let parseForm = express.urlencoded({ extended: false });

let app = express();

app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant
  res.send('Money transferred');
});

Protection provided by the Express.js CSURF middleware is enabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant

See

typescript:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

errorhandler Express.js middleware should not be used in production:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();
app.use(errorhandler()); // Sensitive

Compliant Solution

errorhandler Express.js middleware used only in development mode:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();

if (process.env.NODE_ENV === 'development') {
  app.use(errorhandler());
}

See

typescript:S5604

Powerful features are browser features (geolocation, camera, microphone, …) that can be accessed with a JavaScript API and may require a permission granted by the user. These features can have a high impact on privacy and user security, thus they should only be used if they are really necessary to implement the critical parts of an application.

This rule highlights intrusive permissions when requested with the future standard (but currently experimental) web browser query API and specific APIs related to the permission. It is highly recommended to customize this rule with the permissions considered as intrusive in the context of the web application.

Ask Yourself Whether

  • Some powerful features used by the application are not really necessary.
  • Users are not clearly informed why and when powerful features are used by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In order to respect user privacy it is recommended to avoid using intrusive powerful features.

Sensitive Code Example

When using the geolocation API, Firefox, for example, retrieves personal information such as nearby wireless access points and the IP address, and sends it to the default geolocation service provider, Google Location Services:

navigator.permissions.query({name:"geolocation"}).then(function(result) {
});  // Sensitive: geolocation is a powerful feature with high privacy concerns

navigator.geolocation.getCurrentPosition(function(position) {
  console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude);
}); // Sensitive: geolocation is a powerful feature with high privacy concerns

Compliant Solution

If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible:

<html>
<head>
    <title>
        Retailer website example
    </title>
</head>
<body>
    Type a city, street or zip code where you want to retrieve the closest retail locations of our products:
    <form method=post>
        <input type=text value="New York"> <!-- Compliant -->
    </form>
</body>
</html>

See

typescript:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add quiet malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface, or otherwise affect the general availability of the application.
  • mine cryptocurrencies in the background.

Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It ensures that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.
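
As an illustration, a minimal sketch of computing such an integrity string (assuming Node.js and a locally downloaded copy of the artifact named script.js):

const crypto = require('crypto');
const fs = require('fs');

// Hash the exact bytes of the artifact with SHA-384 and base64-encode the digest,
// producing the value expected by the integrity attribute.
const digest = crypto.createHash('sha384')
  .update(fs.readFileSync('script.js'))
  .digest('base64');

console.log(`sha384-${digest}`);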

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

let script = document.createElement("script");
script.src = "https://cdn.example.com/latest/script.js"; // Sensitive
script.crossOrigin = "anonymous";
document.head.appendChild(script);

Compliant Solution

let script = document.createElement("script");
script.src = "https://cdn.example.com/v5.3.6/script.js";
script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC";
script.crossOrigin = "anonymous";
document.head.appendChild(script);

See

typescript:S5728

Content Security Policy (CSP) (fetch directives) is a W3C standard which is used by a server to specify, via an HTTP header, the origins from which the browser is allowed to load resources. It can help to mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website doesn’t define a CSP header, the browser applies the same-origin policy by default.

Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com

In the above example, all resources are allowed from the website where this header is set, and script resources fetched from example.com are also authorized:

<img src="selfhostedimage.png"> <!-- will be loaded because the default-src 'self' directive applies -->
<img src="http://www.example.com/image.png"> <!-- will NOT be loaded because the default-src 'self' directive applies -->
<script src="http://www.example.com/library.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive applies -->
<script src="selfhostedscript.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive applies -->
<script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because the script-src 'self' http://www.example.com directive applies -->

Ask Yourself Whether

  • The resources of the application are fetched from various untrusted locations.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement Content Security Policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application: CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(
    helmet({
      contentSecurityPolicy: false, // Sensitive
    })
);

Compliant Solution

In an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(helmet.contentSecurityPolicy()); // Compliant
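
For finer-grained control, directives can also be passed explicitly; a minimal sketch (assuming the same Express.js application) reproducing the example policy shown earlier:

app.use(helmet.contentSecurityPolicy({
  directives: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'", "http://www.example.com"]
  }
}));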

See

typescript:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant

Compliant solution

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integer Authenticated Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
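
As an end-to-end illustration of the recommended mode, a minimal sketch of an AES-256-GCM round trip with Node.js's crypto module (the key and message are placeholders):

const crypto = require('crypto');

const key = crypto.randomBytes(32); // 256-bit key
const iv  = crypto.randomBytes(12); // 96-bit IV, the recommended size for GCM

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('secret message', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag(); // must be stored alongside the IV and ciphertext

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(tag); // decryption throws if the ciphertext or tag was tampered with
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');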

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
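
A minimal sketch of explicitly requesting OAEP padding with Node.js's crypto module (the key pair is generated inline purely for illustration):

const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

const encrypted = crypto.publicEncrypt({
  key: publicKey,
  padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
  oaepHash: 'sha256' // OAEP with SHA-256 instead of the PKCS1v1.5 scheme
}, Buffer.from('message'));

const decrypted = crypto.privateDecrypt({
  key: privateKey,
  padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
  oaepHash: 'sha256'
}, encrypted);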

Resources

Articles & blog posts

Standards

typescript:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

const crypto = require('crypto');

crypto.createCipheriv("DES", key, iv); // Noncompliant

Compliant solution

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

typescript:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in jsonwebtoken

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['none'] // Noncompliant
}, callbackcheck);

Compliant solution

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'HS256' });

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['HS256']
}, callbackcheck);

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.
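
A minimal sketch of RS256 signing and verification with the jsonwebtoken package (the PEM file paths and payload are assumptions):

const jwt = require('jsonwebtoken');
const fs = require('fs');

const privateKey = fs.readFileSync('private.pem'); // kept secret, used only for signing
const publicKey  = fs.readFileSync('public.pem');  // may be shared, used for verification

const token = jwt.sign({ sub: 'user42' }, privateKey, { algorithm: 'RS256', expiresIn: '1h' });
const payload = jwt.verify(token, publicKey, { algorithms: ['RS256'] }); // throws if the signature is invalid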

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take when encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.
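
For example, a minimal sketch of reading the signing key from the environment instead of the code base (JWT_SECRET is a hypothetical variable name):

const signingKey = process.env.JWT_SECRET; // injected at deployment time, never committed
if (!signingKey) {
  throw new Error('JWT_SECRET is not configured');
}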

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

typescript:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive; it has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the Math.random() function relies on a weak pseudorandom number generator, it should not be used for security-critical applications or for protecting sensitive data. In such contexts, a cryptographically strong pseudorandom number generator (CSPRNG) should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong pseudorandom number generator (CSPRNG) like crypto.getRandomValues().
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

const val = Math.random(); // Sensitive
// Check if val is used in a security context.

Compliant Solution

// === Client side ===
const crypto = window.crypto || window.msCrypto;
var array = new Uint32Array(1);
crypto.getRandomValues(array); // Compliant for security-sensitive use cases

// === Server side ===
const crypto = require('crypto');
const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases
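
A common pattern for generating secrets such as session tokens (a sketch, assuming the server-side crypto module required above) is to draw enough random bytes and encode them:

const token = crypto.randomBytes(32).toString('hex'); // 256 bits of entropy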

See

typescript:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

Noncompliant code example

Node.js offers multiple ways to set weak TLS protocols. For the https and tls modules, the options below are used; they are also accepted by other third-party libraries.

The first is secureProtocol:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
 secureProtocol: 'TLSv1_method' // Noncompliant
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The second is the combination of minVersion and maxVersion. Note that they cannot be specified along with the secureProtocol option:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.1',  // Noncompliant
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The third is secureOptions, which in this example instructs OpenSSL to turn off some protocol versions altogether. In general, this option might trigger side effects and should be used carefully, if used at all.

const https     = require('node:https');
const tls       = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
}; // Noncompliant

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Compliant solution

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions:

const https = require('node:https');
const tls   = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
    | constants.SSL_OP_NO_TLSv1_1
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

typescript:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive; it has led to vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously, the private key needs to remain secret and be renewed regularly. However, these are not the only means to defeat or weaken encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that the strength of encryption algorithms decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to cryptanalysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts (see the sketch after this list).
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.
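
As an illustration of the IV recommendation in the list above, a minimal sketch (assuming Node.js's crypto module; CBC is shown only to illustrate the IV rule, GCM remains the preferred mode):

const crypto = require('crypto');

function encrypt(key, plaintext) {
  const iv = crypto.randomBytes(16); // a fresh, unpredictable IV for every encryption cycle
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext }; // the IV is not secret, but it must never be reused with the same key
}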

Sensitive Code Example

// === Client side ===
crypto.subtle.encrypt(algo, key, plainData); // Sensitive
crypto.subtle.decrypt(algo, key, encData); // Sensitive
// === Server side ===
const crypto = require("crypto");
const cipher = crypto.createCipher(algo, key); // Sensitive
const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive
const decipher = crypto.createDecipher(algo, key); // Sensitive
const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive
const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive
const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive
const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive
const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive

See

typescript:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Passport

Code examples

Upon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve this by using the req.session.regenerate() method. By calling this method after successful authentication, you can ensure that each user is assigned a new and unique session ID.

Noncompliant code example

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    // Noncompliant - no session.regenerate after login
    res.redirect('/');
  });

Compliant solution

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    let prevSession = req.session;
    req.session.regenerate((err) => {
      Object.assign(req.session, prevSession);
      res.redirect('/');
    });
  });

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

Resources

Documentation

Articles & blog posts

Standards

typescript:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session-cookie
  • the HttpOnly attribute offers an additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default the HttpOnly flag should be set to true for most cookies, and it’s mandatory for session / sensitive-security cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  httpOnly: false, // Sensitive
}); // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  httpOnly: true, // Compliant
}); // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant

See

typescript:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)
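
As an illustration of the last point, here is a minimal sketch of deriving a 256-bit AES key from a passphrase with Node.js' built-in PBKDF2 (the passphrase, salt size, and iteration count below are illustrative assumptions, not values mandated by the rule):

const crypto = require('crypto');

const salt = crypto.randomBytes(16); // a fresh random salt per derivation
crypto.pbkdf2('a long passphrase', salt, 600000, 32, 'sha256', (err, key) => {
  if (err) throw err;
  // key is a 32-byte Buffer, usable as an AES-256 key
});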

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

// generateKeyPairSync returns the key pair directly and takes no callback
var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 1024,  // Noncompliant
  publicKeyEncoding:  { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

// generateKeyPairSync returns the key pair directly and takes no callback
var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
  modulusLength: 1024,  // Noncompliant
  publicKeyEncoding:  { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

const crypto = require('crypto');

// generateKeyPair is asynchronous: the keys are delivered to the callback
function callback(err, publicKey, privateKey) {}

crypto.generateKeyPair('ec', {
    namedCurve: 'secp112r2', // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Compliant solution

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

// generateKeyPairSync returns the key pair directly and takes no callback
var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
  publicKeyEncoding:  { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

// generateKeyPairSync returns the key pair directly and takes no callback
var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
  modulusLength: 2048,
  publicKeyEncoding:  { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

const crypto = require('crypto');

// generateKeyPair is asynchronous: the keys are delivered to the callback
function callback(err, publicKey, privateKey) {}

crypto.generateKeyPair('ec', {
    namedCurve: 'secp224k1',
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
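
For illustration, here is a sketch of generating a key of a recommended size with Node.js' crypto module (generateKeySync requires Node.js 15 or later; the choice of 256 bits is per the guidance above):

const crypto = require('crypto');

const key = crypto.generateKeySync('aes', { length: 256 }); // returns a KeyObject
// Equivalently, 32 random bytes can serve as raw AES-256 key material:
const rawKey = crypto.randomBytes(32);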

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1
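
For example, here is a sketch of generating a key pair on NIST P-256, the first curve family above, which Node.js/OpenSSL exposes under the name prime256v1:

const crypto = require('crypto');

crypto.generateKeyPair('ec', {
  namedCurve: 'prime256v1', // NIST P-256, also known as secp256r1
  publicKeyEncoding:  { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
}, (err, publicKey, privateKey) => {
  if (err) throw err;
  // use the PEM-encoded key pair
});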

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.

Resources

Articles & blog posts

Standards

typescript:S4784

This rule is deprecated; use S5852 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{.

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If possible, use a library that is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.
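
For illustration, here is a sketch using the third-party re2 npm package, which binds Google's RE2 engine behind a RegExp-like API (the package choice is an assumption, not part of this rule):

const RE2 = require('re2');

// RE2 matches in linear time, so this pattern cannot cause
// catastrophic backtracking the way the native engine can.
const re = new RE2('(a+)+b');
console.log(re.test('aaaaaaaaaaaaaaaaaaaaaaaaaaaaab')); // fast, even on pathological input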

Sensitive Code Example

const regex = /(a+)+b/; // Sensitive
const regex2 = new RegExp("(a+)+b"); // Sensitive

str.search("(a+)+b"); // Sensitive
str.match("(a+)+b"); // Sensitive
str.split("(a+)+b"); // Sensitive

Note: String.matchAll does not raise any issue as it is not supported by NodeJS.

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

typescript:S5757

Log management is an important topic, especially for the security of a web application: user activity, including that of potential attackers, should be recorded and available for an analyst to understand what happened on the web application in case of malicious activity.

Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS and others. However, to protect users’ privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text.

Ask Yourself Whether

In a production environment:

  • The web application uses confidential information and logs a significant amount of data.
  • Logs are externalized to SIEM or Big Data repositories.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs.

Sensitive Code Example

With the Signale log management framework the code is sensitive when an empty list of secrets is defined:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// Suppose the credit card numbers are retrieved from somewhere and look like
// ["1234-5678-0000-9999", "1234-5678-0000-8888"], for instance

const options = {
  secrets: []         // empty list of secrets
};

const logger = new Signale(options); // Sensitive

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

Compliant Solution

With the Signale log management framework it is possible to define a list of secrets that will be hidden in logs:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// Suppose the credit card numbers are retrieved from somewhere and look like
// ["1234-5678-0000-9999", "1234-5678-0000-8888"], for instance

const options = {
  secrets: ["([0-9]{4}-?)+"]
};

const logger = new Signale(options); // Compliant

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

See

typescript:S5759

Users often connect to web servers through HTTP proxies.

A proxy can be configured to forward the client IP address via the X-Forwarded-For or Forwarded HTTP headers.

An IP address is personal information which can identify a single user and thus impact their privacy.

Ask Yourself Whether

  • The web application uses reverse proxies or similar but doesn’t need to know the IP address of the user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

The user’s IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management.

Sensitive Code Example

node-http-proxy

var httpProxy = require('http-proxy');

httpProxy.createProxyServer({target:'http://localhost:9000', xfwd:true}) // Noncompliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant
app.listen(3000);

Compliant Solution

node-http-proxy

var httpProxy = require('http-proxy');

// By default xfwd option is false
httpProxy.createProxyServer({target:'http://localhost:9000'}) // Compliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// By default xfwd option is false
app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true})); // Compliant
app.listen(3000);

See

typescript:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led in the past to the following vulnerabilities:

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a very few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie. The encoding can be reverted and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.
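
For illustration, here is a session cookie configured along those lines with Express (a sketch; generateSessionId is a hypothetical helper, not a library function):

const express = require('express');
const app = express();

app.use(function(req, res, next) {
  res.cookie('sessionId', generateSessionId(), { // hypothetical helper
    httpOnly: true,    // not readable by client-side scripts
    secure: true,      // only sent over HTTPS
    sameSite: 'strict' // not sent on cross-site requests
  });
  next();
});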

As a side note, all information read from a cookie should be sanitized.

Sensitive Code Example

// === Built-in NodeJS modules ===
const http = require('http');
const https = require('https');

http.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
https.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
// === ExpressJS ===
const express = require('express');
const app = express();
app.use(function(req, res, next) {
  res.cookie('name', 'John'); // Sensitive
});
// === In browser ===
// Set cookie
document.cookie = "name=John"; // Sensitive

See

typescript:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2, because slower hashing slows down brute-force attacks.
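
As an illustration of the password-hashing advice, here is a minimal sketch with Node.js' built-in scrypt (the salt size and key length are illustrative):

const crypto = require('crypto');

const salt = crypto.randomBytes(16);
crypto.scrypt('user-password', salt, 64, (err, derivedKey) => {
  if (err) throw err;
  // store the salt and derivedKey; never store the plain password
  console.log(salt.toString('hex'), derivedKey.toString('hex'));
});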

Sensitive Code Example

const crypto = require("crypto");

const hash = crypto.createHash('sha1'); // Sensitive

Compliant Solution

const crypto = require("crypto");

const hash = crypto.createHash('sha512'); // Compliant

See

typescript:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by overriding checkServerIdentity with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.
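
With the self-signed certificate trusted, the connection code needs no checkServerIdentity override. Here is a sketch of passing the extra CA explicitly instead (dev-ca.pem and dev.example.com are placeholders):

const https = require('node:https');
const fs = require('node:fs');

const options = {
  hostname: 'dev.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  ca: fs.readFileSync('dev-ca.pem') // replaces the default CA list for this request; hostname validation stays enabled
};

https.request(options, (res) => {
  res.on('data', (d) => process.stdout.write(d));
}).end();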

Resources

Standards

typescript:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as an XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in libxmljs

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml, {
    noblanks: true,
    noent: true, // Noncompliant
    nocdata: true
});

Compliant solution

parseXmlString is safe by default.

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

typescript:S4817

This rule is deprecated, and will eventually be removed.

Executing XPATH expressions is security-sensitive. It has led in the past to the following vulnerabilities:

User-provided data such as URL parameters should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the initial meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document.

Ask Yourself Whether

  • the XPATH expression might contain some unsafe input coming from a user.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize any user input before using it in an XPATH expression.
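
A minimal sanitization sketch, assuming the user input is expected to be a simple identifier (the allow-list pattern and query shape are illustrative only):

var xpath = require('xpath');

function selectUserNode(doc, userinput) {
  // Allow-list check: reject anything that is not a plain identifier,
  // so quotes and XPath meta-characters never reach the expression.
  if (!/^[a-zA-Z0-9_-]+$/.test(userinput)) {
    throw new Error('Invalid input');
  }
  return xpath.select("//user[@name='" + userinput + "']", doc);
}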

Sensitive Code Example

// === Server side ===

var xpath = require('xpath');
var xmldom = require('xmldom');

var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select(userinput, doc); // Sensitive
var node = xpath.select1(userinput, doc); // Sensitive
// === Client side ===

// Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

// Internet Explorer uses its own methods to select nodes:
var nodes = xmlDoc.selectNodes(userinput); // Sensitive
var node = xmlDoc.SelectSingleNode(userinput); // Sensitive

See

typescript:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive. It has led in the past to the following vulnerabilities:

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not use through frameworks or high-level APIs such as the use of http connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases there is no need to open a socket yourself. Instead, use libraries and existing protocols.
  • Encrypt all data sent if it is sensitive. Usually it is better to encrypt it even if the data is not sensitive as it might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.
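
Where opening a socket is unavoidable and the data is sensitive, a TLS socket can provide the encryption mentioned above; a minimal sketch:

const tls = require('node:tls');

// Encrypted counterpart of a raw TCP connection; certificate
// validation remains enabled by default.
const socket = tls.connect(443, 'example.com', () => {
  socket.write('hello over TLS\n');
  socket.end();
});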

Sensitive Code Example

const net = require('net');

var socket = new net.Socket(); // Sensitive
socket.connect(80, 'google.com');

// net.createConnection creates a new net.Socket, initiates connection with socket.connect(), then returns the net.Socket that starts the connection
net.createConnection({ port: port }, () => {}); // Sensitive

// net.connect is an alias to net.createConnection
net.connect({ port: port }, () => {}); // Sensitive

See

typescript:S1523

Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: an XSS attack) and have a huge impact on an application’s security.

This rule raises issues on calls to eval and the Function constructor. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

The rule also flags string literals starting with javascript:, as the code passed in javascript: URLs is evaluated the same way as calls to eval or the Function constructor.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
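
As an illustration of the last point, the dynamic property lookup in the example below can usually be replaced by plain property access guarded by an allow-list (a sketch; the property names are hypothetical):

// Instead of eval('obj.' + propName):
const ALLOWED_PROPS = new Set(['name', 'email']);

function readProp(obj, propName) {
  if (!ALLOWED_PROPS.has(propName)) {
    throw new Error('Unexpected property: ' + propName);
  }
  return obj[propName]; // no dynamic code execution involved
}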

Sensitive Code Example

let value = eval('obj.' + propName); // Sensitive
let func = Function('obj' + propName); // Sensitive
location.href = 'javascript:void(0)'; // Sensitive

Exceptions

This rule will not raise an issue when the argument of eval or Function is a literal string, as this is reasonably safe.

See

typescript:S1525

This rule is deprecated; use S4507 instead.

Why is this an issue?

The debugger statement can be placed anywhere in procedures to suspend execution. Using the debugger statement is similar to setting a breakpoint in the code. By definition, such statements must absolutely be removed from the source code to prevent any unexpected behavior or added vulnerability to attacks in production.

Noncompliant code example

for (let i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
  // Wait for user to resume.
  debugger;
}

Compliant solution

for (let i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
}

Resources

typescript:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o777); // Sensitive
const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive
const fs = require('fs');
const fsPromises = fs.promises

async function fileHandler() {
  let filehandle;
  try {
    filehandle = fsPromises.open('/tmp/fsPromises', 'r');
    filehandle.chmod(0o777); // Sensitive
  } finally {
    if (filehandle !== undefined)
      filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o000); // Sensitive

Compliant Solution

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o770); // Compliant
const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant
const fs = require('fs');
const fsPromises = fs.promises

async function fileHandler() {
  let filehandle;
  try {
    filehandle = fsPromises.open('/tmp/fsPromises', 'r');
    filehandle.chmod(0o770); // Compliant
  } finally {
    if (filehandle !== undefined)
      filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o007); // Compliant

See

typescript:S4721

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process; indeed, shell meta-characters can be used (when parameters are user-controlled, for instance) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.

Sensitive Code Example

const cp = require('child_process');

// A shell will be spawned in the following cases:
cp.exec(cmd); // Sensitive
cp.execSync(cmd); // Sensitive

cp.spawn(cmd, { shell: true }); // Sensitive
cp.spawnSync(cmd, { shell: true }); // Sensitive
cp.execFile(cmd, { shell: true }); // Sensitive
cp.execFileSync(cmd, { shell: true }); // Sensitive

Compliant Solution

const cp = require('child_process');

cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant

See

typescript:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without having to rebuild the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Compliant Solution

ip = process.env.IP_ADDRESS; // Compliant

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID).
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the 2001:db8::/32 range, reserved for documentation purposes by RFC 3849

See

typescript:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led in the past to the following vulnerabilities:

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
// All uses of process.stdin are security-sensitive and should be reviewed

process.stdin.on('readable', () => {
	const chunk = process.stdin.read(); // Sensitive
	if (chunk !== null) {
		dosomething(chunk);
	}
});

const readline = require('readline');
readline.createInterface({
	input: process.stdin // Sensitive
}).on('line', (input) => {
	dosomething(input);
});

See

typescript:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led in the past to the following vulnerabilities:

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus, passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue on every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it in the command line. It is common to write it to the process' standard input, or to give the path to a file containing the information.
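
A sketch of the file-based alternative: the command line carries only a path, and the secret itself stays out of the process list (the argument position and path are illustrative):

const fs = require('fs');

// invoked as: node app.js /run/secrets/api-token
const tokenFile = process.argv[2];
const token = fs.readFileSync(tokenFile, 'utf8').trim();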

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
var param = process.argv[2]; // Sensitive: check how the argument is used
console.log('Param: ' + param);

See

typescript:S4830

This vulnerability makes it possible for an encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Node.js

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by setting rejectUnauthorized to false. To enable validation, set the value to true, or do not set rejectUnauthorized at all to use the secure default value.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  rejectUnauthorized: false,
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
}); // Noncompliant

const tls = require('node:tls');

let options = {
    rejectUnauthorized: false,
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
}); // Noncompliant

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
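
In Node.js, one way to do this without touching the code is the NODE_EXTRA_CA_CERTS environment variable, which appends CAs to the default trust store while keeping validation enabled; the path below is a placeholder:

NODE_EXTRA_CA_CERTS=/path/to/internal-ca.pem node app.js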

Resources

Standards

typescript:S6268

Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM.

Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

  • The value for which sanitization has been disabled is user-controlled.
  • It’s difficult to understand how this value is constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid including dynamic executable code and thus disabling Angular’s built-in sanitization unless it’s absolutely necessary. Try instead to rely as much as possible on static templates and Angular built-in sanitization to define web page content.
  • Make sure to understand how the value to consider as trusted is constructed and never concatenate it with user-controlled data.
  • Make sure to choose the correct DomSanitizer "bypass" method based on the context. For instance, only use bypassSecurityTrustUrl to trust URLs in an href attribute context.

Sensitive Code Example

import { Component, OnInit } from '@angular/core';
import { DomSanitizer, SafeHtml } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello" [innerHTML]="hello"></div>'
})
export class HelloComponent implements OnInit {
  hello: SafeHtml;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    let name = this.route.snapshot.queryParams.name;
    let html = "<h1>Hello " + name + "</h1>";
    this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive
  }
}

Compliant Solution

import { Component, OnInit } from '@angular/core';
import { DomSanitizer } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello"><h1>Hello {{name}}</h1></div>',
})
export class HelloComponent implements OnInit {
  name: string;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    this.name = this.route.snapshot.queryParams.name;
  }
}

See

typescript:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that turns into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers will compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold; in particular, it’s not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For tar module:

const tar = require('tar');

tar.x({ // Sensitive
  file: 'foo.tar.gz'
});

For adm-zip module:

const AdmZip = require('adm-zip');

let zip = new AdmZip("./foo.zip");
zip.extractAllTo("."); // Sensitive

For jszip module:

const fs = require("fs");
const JSZip = require("jszip");

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) { // Sensitive
    zip.forEach(function (relativePath, zipEntry) {
      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(zipEntry.name);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          fs.writeFileSync(zipEntry.name, content);
        });
      }
    });
  });
});

For yauzl module:

const yauzl = require('yauzl');

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  zipfile.on("entry", function(entry) {
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

For extract-zip module:

const extract = require('extract-zip')

async function main() {
  let target = __dirname + '/test';
  await extract('test.zip', { dir: target }); // Sensitive
}
main();

Compliant Solution

For tar module:

const tar = require('tar');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;

tar.x({
  file: 'foo.tar.gz',
  filter: (path, entry) => {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    totalSize += entry.size;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    return true;
  }
});

For adm-zip module:

const AdmZip = require('adm-zip');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

let fileCount = 0;
let totalSize = 0;
let zip = new AdmZip("./foo.zip");
let zipEntries = zip.getEntries();
zipEntries.forEach(function(zipEntry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
    }

    let entrySize = zipEntry.getData().length;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
    }

    let compressionRatio = entrySize / zipEntry.header.compressedSize;
    if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
    }

    if (!zipEntry.isDirectory) {
        zip.extractEntryTo(zipEntry.entryName, ".");
    }
});

For jszip module:

const fs = require("fs");
const pathmodule = require("path");
const JSZip = require("jszip");

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;
let targetDirectory = __dirname + '/archive_tmp';

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) {
    zip.forEach(function (relativePath, zipEntry) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // Prevent ZipSlip path traversal (S6096)
      const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name);
      if (!resolvedPath.startsWith(targetDirectory)) {
        throw 'Path traversal detected';
      }

      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(resolvedPath);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          totalSize += content.length;
          if (totalSize > MAX_SIZE) {
            throw 'Reached max. size';
          }

          fs.writeFileSync(resolvedPath, content);
        });
      }
    });
  });
});

Be aware that, due to the similar structure of sensitive and compliant code, the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For yauzl module:

const yauzl = require('yauzl');

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  let fileCount = 0;
  let totalSize = 0;

  zipfile.on("entry", function(entry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
    // Alternatively, calculate the size from the readStream.
    let entrySize = entry.uncompressedSize;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    if (entry.compressedSize > 0) {
      let compressionRatio = entrySize / entry.compressedSize;
      if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
      }
    }

    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

Be aware that, due to the similar structure of sensitive and compliant code, the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For extract-zip module:

const extract = require('extract-zip')

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

async function main() {
  let fileCount = 0;
  let totalSize = 0;

  let target = __dirname + '/foo';
  await extract('foo.zip', {
    dir: target,
    onEntry: function(entry, zipfile) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
      // Alternatively, calculate the size from the readStream.
      let entrySize = entry.uncompressedSize;
      totalSize += entrySize;
      if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
      }

      if (entry.compressedSize > 0) {
        let compressionRatio = entrySize / entry.compressedSize;
        if (compressionRatio > THRESHOLD_RATIO) {
          throw 'Reached max. compression ratio';
        }
      }
    }
  });
}
main();

See

typescript:S6245

This rule is deprecated, and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply to some regulations, like HIPAA or PCI DSS, and other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'default'
}); // Sensitive

Bucket encryption is disabled by default.

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.S3_MANAGED
});

// Alternatively, with a KMS key managed by the user:

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS,
    encryptionKey: encryptionKey
});

See

typescript:S6249

By default, S3 buckets can be accessed through both the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enforce HTTPS-only access by setting the enforceSSL property to true.

Sensitive Code Example

S3 bucket objects access through TLS is not enforced by default:

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example'); // Sensitive

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example', {
    bucketName: 'example',
    versioned: true,
    publicReadAccess: false,
    enforceSSL: true
});
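
Under the hood, enforceSSL adds a bucket policy statement that denies any request made without TLS. A hand-written equivalent, shown here only as an illustrative sketch, would look like:

import { aws_iam as iam } from 'aws-cdk-lib';

bucket.addToResourcePolicy(new iam.PolicyStatement({
  effect: iam.Effect.DENY,
  actions: ['s3:*'],
  resources: [bucket.bucketArn, bucket.arnForObjects('*')],
  principals: [new iam.AnyPrincipal()],
  conditions: { Bool: { 'aws:SecureTransport': 'false' } } // deny clear-text requests
}));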

See

typescript:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to anyone, either authenticated or anonymous (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, …).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant users only the permissions necessary for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default), and if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

Compliant Solution

With the PRIVATE access control (default), only the bucket owner has the read/write permissions on the bucket and its ACL.

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

See

typescript:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need the access and grant to them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AnyPrincipal()] // Sensitive
}))

Compliant Solution

This policy allows only the authorized users:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AccountRootPrincipal()]
}))

See

typescript:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-explicit', {
      availabilityZone: 'us-west-2a',
      size: Size.gibibytes(1),
      encrypted: false // Sensitive
    });

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-implicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
    }); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'encrypted-explicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
      encrypted: true
    });

See

typescript:S2817

This rule is deprecated, and will eventually be removed.

Why is this an issue?

The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.)

Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database.

Noncompliant code example

var db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024);  // Noncompliant
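
The deprecated openDatabase API has no compliant variant; the standard replacement is IndexedDB. A minimal sketch (database and store names are illustrative):

const request = window.indexedDB.open('myDb', 1);
request.onupgradeneeded = function () {
  // create the object store on first open or version upgrade
  request.result.createObjectStore('records', { keyPath: 'id' });
};
request.onsuccess = function () {
  const db = request.result;
  const tx = db.transaction('records', 'readwrite');
  tx.objectStore('records').put({ id: 1, value: 'data' });
};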

Resources

typescript:S2819

Cross-origin communication allows different websites to interact with each other. This interaction is typically achieved through mechanisms like AJAX requests, WebSockets, or postMessage API. However, a vulnerability can arise when these communications are not properly secured by verifying their origins.

Why is this an issue?

Without origin verification, the target website cannot distinguish between legitimate requests from its own pages and malicious requests from an attacker’s site. The attacker can craft a malicious website or script that sends requests to a target website where the user is already authenticated.

This vulnerability class is not about a single specific user input or action, but rather a series of actions that lead to an insecure cross-origin communication.

What is the potential impact?

The absence of origin verification during cross-origin communications can lead to serious security issues.

Data Breach

If an attacker can successfully exploit this vulnerability, they may gain unauthorized access to sensitive data. For instance, a user’s personal information, financial details, or other confidential data could be exposed. This not only compromises the user’s privacy but can also lead to identity theft or financial loss.

Unauthorized Actions

An attacker could manipulate the communication between websites to perform actions on behalf of the user without their knowledge. This could range from making unauthorized purchases to changing user settings or even deleting accounts.

How to fix it

When sending a message, avoid using * for the target origin (it means no preference). Instead define it explicitly so the message will only be dispatched to this URI. When receiving the message, verify the orgin to be sure that it is sent by an authorized sender.

Code examples

Noncompliant code example

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("hello", "*"); // Noncompliant: * is used

When receiving a message:

window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property.
  console.log(event.data);
 });

Compliant solution

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("hello", "https://secure.example.com");

When receiving a message:

window.addEventListener("message", function(event) {
  if (event.origin !== "http://example.org")
    return;

  console.log(event.data)
});
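
When messages can legitimately come from several origins, an explicit allowlist keeps the check auditable; a short sketch (the origins are illustrative):

const TRUSTED_ORIGINS = ["https://example.org", "https://admin.example.org"];

window.addEventListener("message", function (event) {
  if (!TRUSTED_ORIGINS.includes(event.origin)) {
    return; // ignore messages from unknown senders
  }
  console.log(event.data);
});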

Resources

Documentation

Standards

typescript:S6252

S3 buckets can be versioned. When an S3 bucket is unversioned, a new version of an object overwrites the existing one in the bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: false // Sensitive
});

The default value of versioned is false so the absence of this parameter is also sensitive.

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: true
});

See

typescript:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:

  • blockPublicAcls: whether to block new public ACLs from being set on the S3 bucket.
  • ignorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • blockPublicPolicy: whether to block new public policies from being set on the S3 bucket.
  • restrictPublicBuckets: whether to restrict access to buckets with public policies to principals within the bucket owner’s account.

The separate attribute BlockPublicAccess.BLOCK_ACLS only turns on blockPublicAcls and ignorePublicAcls; public policies can still affect the S3 bucket.

However, all of those options can be enabled by setting the blockPublicAccess property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, …).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • blockPublicAcls to true to block new attempts to set public ACLs.
  • ignorePublicAcls to true to ignore existing public ACLs.
  • blockPublicPolicy to true to block new attempts to set public policies.
  • restrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, blockPublicAccess is fully deactivated (nothing is blocked):

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket'
}); // Sensitive

This blockPublicAccess configuration allows public ACLs to be set:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : false, // Sensitive
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive
});

Compliant Solution

This blockPublicAccess blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL
});

A similar configuration to the one above can be obtained by setting all parameters of blockPublicAccess explicitly:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : true,
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

See

typescript:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

var mysql = require('mysql');

var connection = mysql.createConnection(
{
  host:'localhost',
  user: "admin",
  database: "project",
  password: "mypassword", // Sensitive
  multipleStatements: true
});

connection.connect();

Compliant Solution

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: process.env.MYSQL_URL,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect();
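
For local development, the environment variables can be loaded from a file kept out of version control, for instance with the dotenv package (shown as an assumption, not part of the rule documentation):

// .env -- never committed; list it in .gitignore
// MYSQL_URL=localhost
// MYSQL_USERNAME=app_user
// MYSQL_PASSWORD=...
require('dotenv').config(); // populates process.env before createConnection runs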

See

typescript:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com"; // Sensitive
url = "ftp://anonymous@example.com"; // Sensitive
url = "telnet://anonymous@example.com"; // Sensitive

For nodemailer:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: false, // Sensitive
  requireTLS: false // Sensitive
});

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({}); // Sensitive

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': false // Sensitive
});

For telnet-client:

const Telnet = require('telnet-client'); // Sensitive

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-http-default', {
  port: 8080,
  open: true
}); // Sensitive

alb.addListener('listener-http-explicit', {
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-http-explicit-constructor', {
  loadBalancer: alb,
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

var listenerNLB = nlb.addListener('listener-tcp-default', {
  port: 1234
}); // Sensitive

listenerNLB = nlb.addListener('listener-tcp-explicit', {
  protocol: Protocol.TCP, // Sensitive
  port: 1234
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tcp-explicit-constructor', {
  loadBalancer: nlb,
  protocol: Protocol.TCP, // Sensitive
  port: 8080
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-http', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "HTTP", // Sensitive
  port: 80
});

new CfnListener(this, 'listener-tcp', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "TCP", // Sensitive
  port: 80
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-tcp', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'tcp' // Sensitive
  }]
});

new CfnLoadBalancer(this, 'elb-http', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'http' // Sensitive
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', {
    vpc,
    internetFacing: true,
    healthCheck: {
    port: 80,
    },
    listeners: [
    {
        externalPort:10000,
        externalProtocol: LoadBalancingProtocol.TCP, // Sensitive
        internalPort:10000
    }]
});

loadBalancer.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.TCP, // Sensitive
  internalPort:10001
});
loadBalancer.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTP, // Sensitive
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'unencrypted-implicit', {
  replicationGroupDescription: 'exampleDescription'
}); // Sensitive

new CfnReplicationGroup(this, 'unencrypted-explicit', {
  replicationGroupDescription: 'exampleDescription',
  transitEncryptionEnabled: false // Sensitive
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-implicit-unencrypted', undefined); // Sensitive

new CfnStream(this, 'cfnstream-explicit-unencrypted', {
  streamEncryption: undefined // Sensitive
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-explicit-unencrypted', {
  encryption: StreamEncryption.UNENCRYPTED // Sensitive
});

Compliant Solution

url = "https://example.com";
url = "sftp://anonymous@example.com";
url = "ssh://anonymous@example.com";

For nodemailer, one of the following options must be set:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: true,
  requireTLS: true,
  port: 465,
  secured: true
});

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-https-explicit', {
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

alb.addListener('listener-https-implicit', {
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-https-explicit', {
  loadBalancer: loadBalancer,
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

nlb.addListener('listener-tls-explicit', {
  protocol: Protocol.TLS,
  port: 1234,
  certificates: [certificate]
});

nlb.addListener('listener-tls-implicit', {
  port: 1234,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tls-explicit', {
  loadBalancer: loadBalancer,
  protocol: Protocol.TLS,
  port: 8080,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-https', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "HTTPS",
  port: 80,
  certificates: [certificate]
});

new CfnListener(this, 'listener-tls', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "TLS",
  port: 80,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-ssl', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'ssl',
    sslCertificateId: sslCertificateId
  }]
});

new CfnLoadBalancer(this, 'elb-https', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'https',
    sslCertificateId: sslCertificateId
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const lb = new LoadBalancer(this, 'elb-ssl', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort:10000,
      externalProtocol:LoadBalancingProtocol.SSL,
      internalPort:10000
    }]
});

lb.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.SSL,
  internalPort:10001
});
lb.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTPS,
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'encrypted-explicit', {
  replicationGroupDescription: 'example',
  transitEncryptionEnabled: true
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-implicit-encrypted');

new Stream(this, 'stream-explicit-encrypted-selfmanaged', {
  encryption: StreamEncryption.KMS,
  encryptionKey: encryptionKey,
});

new Stream(this, 'stream-explicit-encrypted-managed', {
  encryption: StreamEncryption.MANAGED
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-explicit-encrypted', {
  streamEncryption: {
    encryptionType: encryptionType,
    keyId: encryptionKey.keyId,
  }
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

typescript:S6299

The Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML content, relying on native browser APIs like innerText instead of innerHTML.

It’s still possible to explicitly use innerHTML and similar APIs to render HTML. Accidentally rendering malicious HTML data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks, like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

The application needs to render HTML content which:

  • could be user-controlled and was not previously sanitized.
  • is constructed in a way that is difficult to trace and verify.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid injecting HTML content with v-html directive unless the content can be considered 100% safe, instead try to rely as much as possible on built-in auto-escaping Vue.js features.
  • Take care when using the v-bind:href directive to set URLs, which can contain malicious JavaScript (javascript:onClick(...)).
  • Event directives like :onmouseover are also prone to JavaScript injection and should not be used with unsafe values.

Sensitive Code Example

When using Vue.js templates, the v-html directive enables HTML rendering without any sanitization:

<div v-html="htmlContent"></div> <!-- Noncompliant -->

When using a rendering function, the innerHTML attribute enables HTML rendering without any sanitization:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerHTML: this.htmlContent, // Noncompliant
        }
      }
    );
  },
});

When using JSX, the domPropsInnerHTML attribute enables HTML rendering without any sanitization:

<div domPropsInnerHTML={this.htmlContent}></div> {/* Noncompliant */}

Compliant Solution

When using Vue.js templates, putting the content as a child node of the element is safe:

<div>{{ htmlContent }}</div>

When using a rendering function, using the innerText attribute or putting the content as a child node of the element is safe:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerText: this.htmlContent,
        }
      },
      this.htmlContent // Child node
    );
  },
});

When using JSX, putting the content as a child node of the element is safe:

<div>{this.htmlContent}</div>
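
If rendering user-provided HTML cannot be avoided, sanitize it before binding. A sketch using the DOMPurify library (an assumption; it is not part of Vue.js or of this rule's documentation):

import DOMPurify from 'dompurify';

Vue.component('element', {
  computed: {
    safeHtml: function () {
      // strips script tags, event handlers and javascript: URLs
      return DOMPurify.sanitize(this.htmlContent);
    }
  },
  template: '<div v-html="safeHtml"></div>'
});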

See

typescript:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information may occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes managing secure access control less prone to errors.

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["*"] // Sensitive
        })
    ]
})

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["arn:aws:iam:::policy/team1/*"]
        })
    ]
})

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

typescript:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into the query increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); the goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid building queries through string concatenation; use parameterized queries, prepared statements, or stored procedures, and bind untrusted values to query parameters.

Sensitive Code Example

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive

Compliant Solution

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {});
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {});

Exceptions

This rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call.

const sql = 'SELECT * FROM users WHERE id = ' + userinput;
mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised.

See

typescript:S5691

Hidden files are created automatically by many tools to save user preferences; well-known examples are .profile, .bashrc, .bash_history or .git. To simplify the view, these files are not displayed by default by operating system commands like ls.

Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets.

Ask Yourself Whether

  • Hidden files may have been inadvertently uploaded to the static server’s public directory, and the server accepts requests for hidden files.
  • There is no business use case for serving files in .name format, but the server is not configured to reject requests for this type of file.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable the serving of hidden files.

Sensitive Code Example

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'});   // Sensitive
app.use(serveStaticMiddleware);

Compliant Solution

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'});   // Compliant: "ignore" and "deny" are the recommended values
let serveStaticDefault = serveStatic('public', { 'index': false});   // Compliant: by default, dotfiles (files or directories beginning with a dot) are not served, except for files within a directory that itself begins with a dot; see the serve-static module documentation
app.use(serveStaticMiddleware);

See

typescript:S5693

Rejecting requests with an excessive content length is a good practice: it controls network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • Size limits are not defined for the different resources of the web application.
  • The web application is not protected by rate-limiting features.
  • The web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or less for file uploads.
    • 2 MB or less for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB

const formDefault = new Formidable(); // Sensitive, the default value is 200MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
    fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB
  }
});

let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit
  storage: diskStorage,
});

body-parser module:

// 4MB is more than the recommended limit of 2MB for non-file-upload requests
let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive

Compliant Solution

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 8000000; // Compliant: 8MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
     fileSize: 8000000 // Compliant: 8MB
  }
});

body-parser module:

let jsonParser = bodyParser.json(); // Compliant: when the limit is not defined, the default value is set to 100kb
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant
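
Recent Express versions (4.16+) bundle these parsers, and the same limits apply; a short sketch:

const express = require('express');
const app = express();

app.use(express.json({ limit: '100kb' })); // Compliant
app.use(express.urlencoded({ extended: false, limit: '2mb' })); // Compliant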

See

typescript:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant only the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["*"], // Sensitive
    resources: ["arn:aws:iam:::user/*"],
})

Compliant Solution

A customer-managed policy that grants only the required permissions:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["iam:GetAccountSummary"],
    resources: ["arn:aws:iam:::user/*"],
})

See

typescript:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;
declare const vpc: ec2.Vpc;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: false, // Sensitive
});

Compliant Solution

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;
declare const vpc: ec2.Vpc;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: true,
});

See

typescript:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
}); // Sensitive, encryption must be explicitly enabled

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
}); // Sensitive, encryption must be explicitly enabled

Compliant Solution

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
  encryptionAtRest: {
    enabled: true,
  },
});

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
  encryptionAtRestOptions: {
    enabled: true,
  },
});

See

typescript:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (for example, by using wildcard "*" permissions). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit the number of permissions that are granted and the number of resources to which these permissions are granted. Doing so ensures that there are no users or roles that have more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["*"], // Noncompliant
    })],
});

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"],
    })],
});

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.

Resources

Documentation

Articles & blog posts

Standards

typescript:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. In the event that adversaries physically access the storage media, they cannot decrypt encrypted data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';

new CfnNotebookInstance(this, 'example', {
      instanceType: 'instanceType',
      roleArn: 'roleArn'
}); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});
new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn',
    kmsKeyId: encryptionKey.keyId
});

See

typescript:S5443

Operating systems have global directories to which any user has write access. Those folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. An application manipulating files in these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such APIs will make sure that:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

const fs = require('fs');

let tmp_file = "/tmp/temporary_file"; // Sensitive
fs.readFile(tmp_file, 'utf8', function (err, data) {
  // ...
});

const fs = require('fs');

let tmp_dir = process.env.TMPDIR; // Sensitive
fs.readFile(tmp_dir + "/temporary_file", 'utf8', function (err, data) {
  // ...
});

Compliant Solution

const tmp = require('tmp');

const tmpobj = tmp.fileSync(); // Compliant
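
Node’s built-in fs module offers a similar secure-by-design primitive. A sketch using fs.mkdtemp, which creates a uniquely named, owner-only directory (note that the analyzer may still flag some temp-directory accesses, as explained above):

const fs = require('fs');
const os = require('os');
const path = require('path');

fs.mkdtemp(path.join(os.tmpdir(), 'app-'), function (err, dir) {
  if (err) throw err;
  // dir has an unpredictable suffix and is writable only by the creating user
  fs.writeFile(path.join(dir, 'data.txt'), 'content', function (err) {
    if (err) throw err;
  });
});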

See

typescript:S5689

Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment.

Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version.

Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities.

Ask Yourself Whether

  • Version information is accessible to end users.
  • Internal systems do not benefit from timely patch management workflows.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

In general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle.

The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header.
This can be achieved directly through the web application code, server (nginx, apache) or firewalls.

Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that this does not provide as much protection as regular updates and patches.
Security by obscurity is the least foolproof solution of all. It should never be the only defense mechanism and should always be combined with other security measures.

Sensitive Code Example

In Express.js, version information is disclosed by default in the x-powered-by HTTP header:

let express = require('express');

let example = express(); // Sensitive

example.get('/', function (req, res) {
  res.send('example')
});

Compliant Solution

The x-powered-by HTTP header should be disabled in Express.js with app.disable:

let express = require('express');

let example = express();
example.disable("x-powered-by");

Or with helmet’s hidePoweredBy middleware:

let express = require('express');
let helmet = require("helmet");

let example = express();
example.use(helmet.hidePoweredBy());

See

typescript:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users to enter their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: In Chrome 88+, Firefox 79+ and Safari 12.1+, target=_blank on anchors implies rel=noopener, which enables the protection by default.

Sensitive Code Example

window.open("https://example.com/dangerous"); // Sensitive

Compliant Solution

window.open("https://example.com/dangerous", "WindowName", "noopener");

See

typescript:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS can encrypt messages as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';

new Topic(this, 'exampleTopic'); // Sensitive

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';

new CfnTopic(this, 'exampleCfnTopic'); // Sensitive

Compliant Solution

For aws_cdk.aws_sns.Topic

import { Topic } from 'aws-cdk-lib/aws-sns';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

new Topic(this, 'exampleTopic', {
    masterKey: encryptionKey
});

For aws_cdk.aws_sns.CfnTopic

import { CfnTopic } from 'aws-cdk-lib/aws-sns';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});

const cfnTopic = new CfnTopic(this, 'exampleCfnTopic', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

typescript:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons, such as quick maintenance, time saving, or simply by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(this, "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PUBLIC} // Sensitive
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: true, // Sensitive
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PUBLIC}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_dms as dms} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: true, // Sensitive
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2, aws_rds as rds} from 'aws-cdk-lib'

const rdsSubnetGroupPublic = new rds.CfnDBSubnetGroup(this, "publicSubnet", {
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "publicSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PUBLIC
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPublic.ref,
    publiclyAccessible: true, // Sensitive
    vpcSecurityGroups: [sg.securityGroupId]
})

Compliant Solution

For aws-cdk-lib.aws_ec2.Instance and similar constructs:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.Instance(
    this,
    "example", {
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    vpcSubnets: {subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}
})

For aws-cdk-lib.aws_ec2.CfnInstance:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnInstance(this, "example", {
    instanceType: "t2.micro",
    imageId: "ami-0ea0f26a6d50850c5",
    networkInterfaces: [
        {
            deviceIndex: "0",
            associatePublicIpAddress: false,
            deleteOnTermination: true,
            subnetId: vpc.selectSubnets({subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS}).subnetIds[0]
        }
    ]
})

For aws-cdk-lib.aws_dms.CfnReplicationInstance:

import {aws_dms as dms} from 'aws-cdk-lib'

new dms.CfnReplicationInstance(
    this, "example", {
    replicationInstanceClass: "dms.t2.micro",
    allocatedStorage: 5,
    publiclyAccessible: false,
    replicationSubnetGroupIdentifier: subnetGroup.replicationSubnetGroupIdentifier,
    vpcSecurityGroupIds: [vpc.vpcDefaultSecurityGroup]
})

For aws-cdk-lib.aws_rds.CfnDBInstance:

import {aws_ec2 as ec2, aws_rds as rds} from 'aws-cdk-lib'

const rdsSubnetGroupPrivate = new rds.CfnDBSubnetGroup(this, "example",{
    dbSubnetGroupDescription: "Subnets",
    dbSubnetGroupName: "privateSn",
    subnetIds: vpc.selectSubnets({
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS
    }).subnetIds
})

new rds.CfnDBInstance(this, "example", {
    engine: "postgres",
    masterUsername: "foobar",
    masterUserPassword: "12345678",
    dbInstanceClass: "db.r5.large",
    allocatedStorage: "200",
    iops: 1000,
    dbSubnetGroupName: rdsSubnetGroupPrivate.ref,
    publiclyAccessible: false,
    vpcSecurityGroups: [sg.securityGroupId]
})

See

typescript:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application’s PATH environment variable are searched for the executable. That search can leave an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

const cp = require('child_process');
cp.exec('file.exe'); // Sensitive

Compliant Solution

const cp = require('child_process');
cp.exec('/usr/bin/file.exe'); // Compliant
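
As an additional defense-in-depth measure, not part of the rule's own examples, the environment passed to the child process can be pinned so that its PATH contains only trusted directories. A minimal sketch:

const cp = require('child_process');

// The command itself uses an absolute path, and the child process
// only sees trusted system directories in its PATH.
cp.exec('/usr/bin/file', { env: { PATH: '/usr/bin:/bin' } }, function (err, stdout) {
  // ...
});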

See

typescript:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, and Groovy's template engine allow configuration of automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g., <a>) are transformed/replaced with escaped/sanitized values (e.g., &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not sufficient when a variable is used in an HTML attribute, because the ':' character is not escaped, making an attack like the one below possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

mustache.js template engine:

let Mustache = require("mustache");

Mustache.escape = function(text) {return text;}; // Sensitive

let rendered = Mustache.render(template, { name: inputName });

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";

let template = Handlebars.compile(source, { noEscape: true }); // Sensitive

markdown-it markup language parser:

const markdownIt = require('markdown-it');
let md = markdownIt({
  html: true // Sensitive
});

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer(),
  sanitize: false // Sensitive
});

console.log(marked("# test <b>attack</b>"));

kramed markup language parser:

let kramed = require('kramed');

var options = {
  renderer: new kramed.Renderer({
    sanitize: false // Sensitive
  })
};

Compliant Solution

mustache.js template engine:

let Mustache = require("mustache");

let rendered = Mustache.render(template, { name: inputName }); // Compliant autoescaping is on by default

handlebars.js template engine:

const Handlebars = require('handlebars');

let source = "<p>attack {{name}}</p>";
let data = { "name": "<b>Alan</b>" };

let template = Handlebars.compile(source); // Compliant by default noEscape is set to false

markdown-it markup language parser:

let md = require('markdown-it')(); // Compliant by default html is set to false

let result = md.render('# <b>attack</b>');

marked markup language parser:

const marked = require('marked');

marked.setOptions({
  renderer: new marked.Renderer()
}); // Compliant by default sanitize is set to true

console.log(marked("# test <b>attack</b>"));

kramed markup language parser:

let kramed = require('kramed');

let options = {
  renderer: new kramed.Renderer({
    sanitize: true // Compliant
  })
};

console.log(kramed('Attack [xss?](javascript:alert("xss")).', options));

See

typescript:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from all IPv4"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup.addIngressRule(
    ec2.Peer.anyIpv4(), // Noncompliant
    ec2.Port.tcpRange(1, 1024)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "0.0.0.0/0", // Noncompliant
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroupIngress( // Noncompliant
    this,
    "ingress-all-ip-tcp-ssh", {
        ipProtocol: "tcp",
        cidrIp: "0.0.0.0/0",
        fromPort: 22,
        toPort: 22,
        groupId: securityGroup.attrGroupId
})

Compliant solution

For aws-cdk-lib.aws_ec2.Instance and other constructs that support a connections attribute:

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const instance = new ec2.Instance(this, "default-own-security-group",{
    instanceType: nanoT2,
    machineImage: ec2.MachineImage.latestAmazonLinux(),
    vpc: vpc,
    instanceName: "test-instance"
})

instance.connections.allowFrom(
    ec2.Peer.ipv4("192.0.2.0/24"),
    ec2.Port.tcp(22),
    /*description*/ "Allows SSH from a trusted range"
)

For aws-cdk-lib.aws_ec2.SecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

const securityGroup3 = new ec2.SecurityGroup(this, "custom-security-group", {
    vpc: vpc
})

securityGroup3.addIngressRule(
    ec2.Peer.anyIpv4(),
    ec2.Port.tcpRange(1024, 1048)
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroup

import {aws_ec2 as ec2} from 'aws-cdk-lib'

new ec2.CfnSecurityGroup(
    this,
    "cfn-based-security-group", {
        groupDescription: "cfn based security group",
        groupName: "cfn-based-security-group",
        vpcId: vpc.vpcId,
        securityGroupIngress: [
            {
                ipProtocol: "6",
                cidrIp: "192.0.2.0/24",
                fromPort: 22,
                toPort: 22
            }
        ]
    }
)

For aws-cdk-lib.aws_ec2.CfnSecurityGroupIngress

new ec2.CfnSecurityGroupIngress(
    this,
    "ingress-all-ipv4-tcp-http", {
        ipProtocol: "6",
        cidrIp: "0.0.0.0/0",
        fromPort: 80,
        toPort: 80,
        groupId: securityGroup.attrGroupId
    }
)

Resources

Documentation

Standards

typescript:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sqs.Queue

import { Queue } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example'); // Sensitive

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';

new CfnQueue(this, 'example'); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sqs.Queue

import { Queue, QueueEncryption } from 'aws-cdk-lib/aws-sqs';

new Queue(this, 'example', {
    encryption: QueueEncryption.KMS_MANAGED
});

For aws-cdk-lib.aws-sqs.CfnQueue

import { CfnQueue } from 'aws-cdk-lib/aws-sqs';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'encryptionKey', {
    enableKeyRotation: true,
});

new CfnQueue(this, 'example', {
    kmsMasterKeyId: encryptionKey.keyId
});

See

typescript:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP), as sketched below.
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.
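
As an illustration of the COGNITO_USER_POOLS option, a minimal CDK sketch is shown below; it assumes an existing api REST API and an existing userPool Cognito user pool, neither of which appears in the rule's own examples:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

// userPool is assumed to be an existing cognito.IUserPool
const authorizer = new apigateway.CognitoUserPoolsAuthorizer(this, "authorizer", {
    cognitoUserPools: [userPool]
})

const resource = api.root.addResource("example")
resource.addMethod(
    "GET",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizer: authorizer,
        authorizationType: apigateway.AuthorizationType.COGNITO
    }
)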

Sensitive Code Example

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example")
resource.addMethod(
    "GET",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.NONE // Sensitive
    }
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "no-auth", {
    apiId: api.ref,
    routeKey: "GET /no-auth",
    authorizationType: "NONE", // Sensitive
    target: exampleIntegration
})

Compliant Solution

For aws-cdk-lib.aws_apigateway.Resource:

import {aws_apigateway as apigateway} from "aws-cdk-lib"

const resource = api.root.addResource("example",{
    defaultMethodOptions:{
        authorizationType: apigateway.AuthorizationType.IAM
    }
})
resource.addMethod(
    "POST",
    new apigateway.HttpIntegration("https://example.org"),
    {
        authorizationType: apigateway.AuthorizationType.IAM
    }
)
resource.addMethod(  // authorizationType is inherited from the Resource's configured defaultMethodOptions
    "GET"
)

For aws-cdk-lib.aws_apigatewayv2.CfnRoute:

import {aws_apigatewayv2 as apigateway} from "aws-cdk-lib"

new apigateway.CfnRoute(this, "auth", {
    apiId: api.ref,
    routeKey: "POST /auth",
    authorizationType: "AWS_IAM",
    target: exampleIntegration
})

See

typescript:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • it is not certain whether the website contains mixed content (i.e., whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session cookies.

Sensitive Code Example

cookie-session module:

const cookieSession = require('cookie-session');

let session = cookieSession({
  secure: false, // Sensitive
});  // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: false // Sensitive
  }
}));

cookies module:

const Cookies = require('cookies');

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: false }}); // Sensitive

Compliant Solution

cookie-session module:

const cookieSession = require('cookie-session');

let session = cookieSession({
  secure: true, // Compliant
});  // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    secure: true // Compliant
  }
}));
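
Note that when the application runs behind a TLS-terminating reverse proxy, Express only treats the connection as secure if the proxy is trusted. A minimal sketch based on the express-session documentation, assuming a single proxy hop:

const express = require('express');
const session = require('express-session');

let app = express();
app.set('trust proxy', 1); // trust the first proxy hop (the TLS terminator)
app.use(session({
  cookie:
  {
    secure: true // the cookie is then only sent over HTTPS
  }
}));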

cookies module:

const Cookies = require('cookies');

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  secure: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { secure: true }}); // Compliant

See

typescript:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in its response, called CORS headers, that act as directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': '*' }); // Sensitive
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const express = require('express');
const cors = require('cors');

let app1 = express();
app1.use(cors()); // Sensitive: by default origin is set to *

let corsOptions = {
  origin: '*' // Sensitive
};

let app2 = express();
app2.use(cors(corsOptions));

User-controlled origin:

function handleCors(req, res) {
  const origin = req.header('Origin');
  res.setHeader('Access-Control-Allow-Origin', origin); // Sensitive
}

Compliant Solution

nodejs http built-in module:

const http = require('http');
const srv = http.createServer((req, res) => {
  res.writeHead(200, { 'Access-Control-Allow-Origin': 'trustedwebsite.com' }); // Compliant
  res.end('ok');
});
srv.listen(3000);

Express.js framework with cors middleware:

const express = require('express');
const cors = require('cors');

let corsOptions = {
  origin: 'trustedwebsite.com' // Compliant
};

let app = express();
app.use(cors(corsOptions));

User-controlled origin validated with an allow-list:

function handleCors(req, res) {
  const origin = req.header('Origin');

  if (trustedOrigins.indexOf(origin) >= 0) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
}
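
When the allowed origin is computed per request, as above, it is also advisable, although this goes beyond the rule's own example, to tell caches that the response varies by origin:

function handleCors(req, res) {
  const origin = req.header('Origin');

  if (trustedOrigins.indexOf(origin) >= 0) {
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Vary', 'Origin'); // caches must not reuse this response for other origins
  }
}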

See

typescript:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service. In the case that adversaries gain physical access to the storage medium or otherwise leak stored files, they are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_cdk.aws_efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'unencrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: false // Sensitive
});

For aws_cdk.aws_efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'unencrypted-implicit-cfn', {
}); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_efs.FileSystem

import { FileSystem } from 'aws-cdk-lib/aws-efs';
import { Vpc } from 'aws-cdk-lib/aws-ec2';

new FileSystem(this, 'encrypted-explicit', {
    vpc: new Vpc(this, 'VPC'),
    encrypted: true
});

For aws_cdk.aws_efs.CfnFileSystem

import { CfnFileSystem } from 'aws-cdk-lib/aws-efs';

new CfnFileSystem(this, 'encrypted-explicit-cfn', {
    encrypted: true
});

See

csharpsquid:S2115

When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in Entity Framework Core

Code examples

The following code uses an empty password to connect to a SQL Server database.

The vulnerability can be fixed by using Windows authentication (sometimes referred to as integrated security).

Noncompliant code example

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
  optionsBuilder.UseSqlServer("Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password="); // Noncompliant
}

Compliant solution

protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
  optionsBuilder.UseSqlServer("Server=myServerAddress;Database=myDataBase;Integrated Security=True");
}

How does this work?

Windows authentication (integrated security)

When the connection string includes the Integrated Security=true parameter, it enables Windows authentication (sometimes called integrated security) for the database connection. With integrated security, the user’s Windows credentials are used to authenticate and authorize access to the database. It eliminates the need for a separate username and password for the database connection. Integrated security simplifies authentication and leverages the existing Windows authentication infrastructure for secure database access in your C# application.

It’s important to note that when using integrated security, the user running the application must have the necessary permissions to access the database. Ensure that the user account running the application has the appropriate privileges and is granted access to the database.

The syntax employed in connection strings varies by provider:

Syntax                        Supported by
Integrated Security=true;     SQL Server, Oracle, Postgres
Integrated Security=SSPI;     SQL Server, OLE DB
Integrated Security=yes;      MySQL
Trusted_Connection=true;      SQL Server
Trusted_Connection=yes;       ODBC

Note: Some providers such as MySQL do not support Windows authentication with .NET Core.

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:

  1. Security Vulnerability: Hard-coded passwords can be easily discovered by anyone who has access to the code, such as other developers or attackers. This can lead to unauthorized access to the database and potential data breaches.
  2. Lack of Flexibility: Hard-coded passwords make it difficult to change the password without modifying the code. If the password needs to be updated, it would require recompiling and redeploying the code, which can be time-consuming and error-prone.
  3. Version Control Issues: Storing passwords in code can lead to version control issues. If the code is shared or stored in a version control system, the password will be visible to anyone with access to the repository, which is a security risk.

To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase.

Resources

Standards

csharpsquid:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In Cipher Block Chaining (CBC) mode, each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

using System.IO;
using System.Security.Cryptography;

public void Encrypt(byte[] key, byte[] dataToEncrypt, MemoryStream target)
{
    var aes = new AesCryptoServiceProvider();

    byte[] iv     = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 };
    var encryptor = aes.CreateEncryptor(key, iv); // Noncompliant

    var cryptoStream = new CryptoStream(target, encryptor, CryptoStreamMode.Write);
    var swEncrypt    = new StreamWriter(cryptoStream);

    swEncrypt.Write(dataToEncrypt);
}

Compliant solution

In this example, the code implicitly uses a random number generator that is considered strong: the aes.IV property returns a fresh, randomly generated IV for each new AesCryptoServiceProvider instance.

using System.IO;
using System.Security.Cryptography;

public void Encrypt(byte[] key, byte[] dataToEncrypt, MemoryStream target)
{
    var aes = new AesCryptoServiceProvider();

    var encryptor = aes.CreateEncryptor(key, aes.IV);

    var cryptoStream = new CryptoStream(target, encryptor, CryptoStreamMode.Write);
    var swEncrypt    = new StreamWriter(cryptoStream);

    swEncrypt.Write(dataToEncrypt);
}

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.

Resources

Standards

csharpsquid:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Sensitive operations should not be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddControllersWithViews(options => options.Filters.Add(new IgnoreAntiforgeryTokenAttribute())); // Sensitive
    // ...
}

[HttpPost, IgnoreAntiforgeryToken] // Sensitive
public IActionResult ChangeEmail(ChangeEmailModel model) => View("~/Views/...");

Compliant Solution

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddControllersWithViews(options => options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));
    // or
    services.AddControllersWithViews(options => options.Filters.Add(new ValidateAntiForgeryTokenAttribute()));
    // ...
}

[HttpPost]
[AutoValidateAntiforgeryToken]
public IActionResult ChangeEmail(ChangeEmailModel model) => View("~/Views/...");

See

csharpsquid:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers.

The .NET Core framework offers multiple features that help during debugging. Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage and Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage are two of them. Make sure that those features are disabled in production.

Use if (env.IsDevelopment()) to disable debug code.

Sensitive Code Example

This rule raises issues when the following .NET Core methods are called: Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDeveloperExceptionPage, Microsoft.AspNetCore.Builder.IApplicationBuilder.UseDatabaseErrorPage.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace mvcApp
{
    public class Startup2
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            // Those calls are Sensitive because it seems that they will run in production
            app.UseDeveloperExceptionPage(); // Sensitive
            app.UseDatabaseErrorPage(); // Sensitive
        }
    }
}

Compliant Solution

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

namespace mvcApp
{
    public class Startup2
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                // The following calls are ok because they are disabled in production
                app.UseDeveloperExceptionPage(); // Compliant
                app.UseDatabaseErrorPage(); // Compliant
            }
        }
    }
}

Exceptions

This rule does not analyze configuration files. Make sure that debug mode is not enabled by default in those files.

See

csharpsquid:S5773

Deserialization is the process of converting serialized data (such as objects or data structures) back into their original form. The types allowed to be deserialized should be strictly controlled.

Why is this an issue?

During the deserialization process, the state of an object will be reconstructed from the serialized data stream. By allowing unrestricted deserialization of types, the application makes it possible for attackers to use types with dangerous or otherwise sensitive behavior during the deserialization process.

What is the potential impact?

When an application deserializes untrusted data without proper restrictions, an attacker can craft malicious serialized objects. Depending on the affected objects and properties, the consequences can vary.

Remote Code Execution

If attackers can craft malicious serialized objects that contain executable code, this code will run within the application’s context, potentially gaining full control over the system. This can lead to unauthorized access, data breaches, or even complete system compromise.

For example, a well-known attack vector consists in serializing an object of type TempFileCollection with arbitrary files (defined by an attacker) that are deleted when the application deserializes this object (that is, when the finalizer of the TempFileCollection object is called). These kinds of specially crafted serialized objects are called "gadgets".

Privilege escalation

Unrestricted deserialization can also enable attackers to escalate their privileges within the application. By manipulating the serialized data, an attacker can modify object properties or bypass security checks, granting them elevated privileges that they should not have. This can result in unauthorized access to sensitive data, unauthorized actions, or even administrative control over the application.

Denial of Service

In some cases, an attacker can abuse the deserialization process to cause a denial of service (DoS) condition. By providing specially crafted serialized data, the attacker can trigger excessive resource consumption, leading to system instability or unresponsiveness. This can disrupt the availability of the application, impacting its functionality and causing inconvenience to users.

How to fix it

Code examples

Noncompliant code example

With BinaryFormatter, NetDataContractSerializer or SoapFormatter:

var myBinaryFormatter = new BinaryFormatter();
myBinaryFormatter.Deserialize(stream); // Noncompliant

With JavaScriptSerializer:

JavaScriptSerializer serializer1 = new JavaScriptSerializer(new SimpleTypeResolver()); // Noncompliant
serializer1.Deserialize<ExpectedType>(json);

Compliant solution

With BinaryFormatter, NetDataContractSerializer or SoapFormatter:

sealed class CustomBinder : SerializationBinder
{
   public override Type BindToType(string assemblyName, string typeName)
   {
       if (!(typeName == "type1" || typeName == "type2" || typeName == "type3"))
       {
          throw new SerializationException("Only type1, type2 and type3 are allowed");
       }
       return Assembly.Load(assemblyName).GetType(typeName);
   }
}

var myBinaryFormatter = new BinaryFormatter();
myBinaryFormatter.Binder = new CustomBinder();
myBinaryFormatter.Deserialize(stream);

With JavaScriptSerializer:

public class CustomSafeTypeResolver : JavaScriptTypeResolver
{
   public override Type ResolveType(string id)
   {
      if(id != "ExpectedType") {
         throw new ArgumentNullException("Only ExpectedType is allowed during deserialization");
      }
      return Type.GetType(id);
   }
}

JavaScriptSerializer serializer = new JavaScriptSerializer(new CustomSafeTypeResolver());
serializer.Deserialize<ExpectedType>(json);

Going the extra mile

Instead of using BinaryFormatter and similar serializers, it is recommended to use safer alternatives in most cases, such as XmlSerializer or DataContractSerializer.

If that is not possible, try to mitigate the risk by restricting the types allowed to be deserialized:

  • by implementing an "allow-list" of types; but keep in mind that novel dangerous types are regularly discovered, so this protection may become insufficient over time.
  • and/or by implementing tamper protection, such as message authentication codes (MAC). This way only objects serialized with the correct MAC hash will be deserialized.

Resources

Documentation

Articles & blog posts

Standards

csharpsquid:S4211

Transparency attributes in the .NET Framework, designed to protect security-critical operations, can lead to ambiguities and vulnerabilities when declared at different levels such as both for the class and a method.

Why is this an issue?

Transparency attributes can be declared at several levels. If two different attributes are declared at two different levels, the attribute that prevails is the one in the highest level. For example, you can declare that a class is SecuritySafeCritical and that a method of this class is SecurityCritical. In this case, the method will be SecuritySafeCritical and the SecurityCritical attribute attached to it is ignored.

What is the potential impact?

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Elevation of Privileges

An attacker could potentially exploit conflicting transparency attributes to perform actions with higher privileges than intended.

Data Exposure

If a member with conflicting attributes is involved in handling sensitive data, an attacker could exploit the vulnerability to gain unauthorized access to this data. This could lead to breaches of confidentiality and potential data loss.

How to fix it

Code examples

Noncompliant code example

using System;
using System.Security;

namespace MyLibrary
{
    [SecuritySafeCritical]
    public class Foo
    {
        [SecurityCritical] // Noncompliant
        public void Bar()
        {
        }
    }
}

Compliant solution

using System;
using System.Security;

namespace MyLibrary
{
    public class Foo
    {
        [SecurityCritical]
        public void Bar()
        {
        }
    }
}

How does this work?

Never set class-level annotations

A class should never have class-level annotations if some functions have different permission levels. Instead, make sure every function has its own correct annotation.

If no function needs a particularly distinct security annotation in a class, just set a class-level [SecurityCritical].

Resources

Articles & blog posts

Standards

csharpsquid:S5547

This vulnerability makes it possible for the cleartext of the encrypted message to be recovered without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

using System.Security.Cryptography;

public void encrypt()
{
    var simpleDES = new DESCryptoServiceProvider(); // Noncompliant
}

Compliant solution

using System.Security.Cryptography;

public void encrypt()
{
    using (Aes aes = Aes.Create())
    {
        // ...
    }
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

csharpsquid:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Jwt.Net

Code examples

The following code contains an example of JWT decoding without verification of the signature.

Noncompliant code example

using JWT;

public static void decode(IJwtDecoder decoder)
{
    decoder.Decode(token, secret, verify: false); // Noncompliant
}

using JWT;

public static void decode()
{
    var jwt = new JwtBuilder()
        .WithSecret(secret)
        .Decode(token); // Noncompliant
}

Compliant solution

using JWT;

public static void decode(IJwtDecoder decoder)
{
    decoder.Decode(token, secret, verify: true);
}

When using JwtBuilder, make sure to call MustVerifySignature().

using JWT;

public static void decode()
{
    var jwt = new JwtBuilder()
        .WithSecret(secret)
        .MustVerifySignature()
        .Decode(token);
}

How does this work?

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take when encountering an invalid token should be denying the request carrying it and logging the event for further investigation (see the sketch below).
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.
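
As a sketch of point 2 above, the following shows one way to deny requests carrying tampered tokens; it assumes the Jwt.Net library, whose builder throws a SignatureVerificationException (found under JWT.Exceptions in recent versions) when verification fails:

using JWT.Builder;
using JWT.Exceptions;

public static string DecodeOrReject(string token, string secret)
{
    try
    {
        return new JwtBuilder()
            .WithSecret(secret)
            .MustVerifySignature()
            .Decode(token);
    }
    catch (SignatureVerificationException)
    {
        // Deny the request carrying this token and log the event
        // for further investigation.
        throw;
    }
}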

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.
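
A minimal sketch of reading the signing secret from an environment variable (the JWT_SECRET variable name is purely illustrative):

using System;
using JWT.Builder;

public static string Decode(string token)
{
    // The secret is provisioned by the deployment environment or a vault
    // service, never hard-coded or committed to version control.
    string secret = Environment.GetEnvironmentVariable("JWT_SECRET")
        ?? throw new InvalidOperationException("JWT_SECRET is not configured");

    return new JwtBuilder()
        .WithSecret(secret)
        .MustVerifySignature()
        .Decode(token);
}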

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.
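
One possible sketch of such a grace period, assuming the application has access to both the current and the previous secret:

using JWT.Builder;
using JWT.Exceptions;

public static string DecodeWithRotation(string token, string currentSecret, string previousSecret)
{
    try
    {
        return new JwtBuilder().WithSecret(currentSecret).MustVerifySignature().Decode(token);
    }
    catch (SignatureVerificationException)
    {
        // Tokens signed with the previous key remain valid during the grace period.
        return new JwtBuilder().WithSecret(previousSecret).MustVerifySignature().Decode(token);
    }
}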

Resources

Standards

csharpsquid:S4212

This rule is deprecated, and will eventually be removed.

Why is this an issue?

Because serialization constructors allocate and initialize objects, security checks that are present on regular constructors must also be present on a serialization constructor. Failure to do so would allow callers that could not otherwise create an instance to use the serialization constructor to do this.

This rule raises an issue when a type implements the System.Runtime.Serialization.ISerializable interface, is not a delegate or an interface, is declared in an assembly that allows partially trusted callers, and has a constructor taking a System.Runtime.Serialization.SerializationInfo object and a System.Runtime.Serialization.StreamingContext object that is not secured by a security check, while one or more of the type’s regular constructors are secured.

Noncompliant code example

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallersAttribute()]
namespace MyLibrary
{
    [Serializable]
    public class Foo : ISerializable
    {
        private int n;

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public Foo()
        {
           n = -1;
        }

        protected Foo(SerializationInfo info, StreamingContext context) // Noncompliant
        {
           n = (int)info.GetValue("n", typeof(int));
        }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
           info.AddValue("n", n);
        }
    }
}

Compliant solution

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallersAttribute()]
namespace MyLibrary
{
    [Serializable]
    public class Foo : ISerializable
    {
        private int n;

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public Foo()
        {
           n = -1;
        }

        [FileIOPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        protected Foo(SerializationInfo info, StreamingContext context)
        {
           n = (int)info.GetValue("n", typeof(int));
        }

        void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
        {
           info.AddValue("n", n);
        }
    }
}

Resources

csharpsquid:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold: data breaches and the exposure of encrypted data can undermine trust in the organization, as customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

Noncompliant code example

These samples explicitly enable TLS 1.0 as the TLS protocol version, which is cryptographically weak.

using System.Net;

public void encrypt()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls; // Noncompliant
}

using System.Net.Http;
using System.Security.Authentication;

public void encrypt()
{
    new HttpClientHandler
    {
        SslProtocols = SslProtocols.Tls // Noncompliant
    };
}

Compliant solution

using System.Net;

public void encrypt()
{
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls13;
}

using System.Net.Http;
using System.Security.Authentication;

public void encrypt()
{
    new HttpClientHandler
    {
        SslProtocols = SslProtocols.Tls12
    };
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
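
On platforms that are kept up to date, another common option is to let the operating system negotiate the protocol version, which Microsoft’s TLS guidance for .NET generally recommends; a minimal sketch:

using System.Net.Http;
using System.Security.Authentication;

public void encrypt()
{
    new HttpClientHandler
    {
        // SslProtocols.None defers the choice to the OS, which negotiates
        // the highest TLS version supported by both peers.
        SslProtocols = SslProtocols.None
    };
}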

Resources

Articles & blog posts

Standards

csharpsquid:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

Using a weak algorithm drastically increases the likelihood that an attacker will be able to recover the cleartext.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in .NET

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

using System.Security.Cryptography;

public void encrypt()
{
    AesManaged aes = new AesManaged
    {
        KeySize = 128,
        BlockSize = 128,
        Mode = CipherMode.ECB,        // Noncompliant
        Padding = PaddingMode.PKCS7
    };
}

Note that Microsoft has marked derived cryptographic types like AesManaged as no longer recommended for use.

Example with an asymmetric cipher, RSA:

using System.Security.Cryptography;

public void encrypt()
{
    RSACryptoServiceProvider RsaCsp = new RSACryptoServiceProvider();
    byte[] encryptedData            = RsaCsp.Encrypt(dataToEncrypt, false); // Noncompliant
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

using System.Security.Cryptography;

public void encrypt()
{
    AesGcm aes = new AesGcm(key);
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

using System.Security.Cryptography;

public void encrypt()
{
    RSACryptoServiceProvider RsaCsp = new RSACryptoServiceProvider();
    byte[] encryptedData            = RsaCsp.Encrypt(dataToEncrypt, true);
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
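
For completeness, here is a minimal AES-GCM round trip, assuming .NET 6 or later for RandomNumberGenerator.GetBytes:

using System;
using System.Security.Cryptography;

public static (byte[] Nonce, byte[] Ciphertext, byte[] Tag) EncryptGcm(byte[] key, byte[] plaintext)
{
    // A fresh, unique nonce is mandatory for every encryption with the same key.
    byte[] nonce = RandomNumberGenerator.GetBytes(AesGcm.NonceByteSizes.MaxSize); // 12 bytes
    byte[] ciphertext = new byte[plaintext.Length];
    byte[] tag = new byte[AesGcm.TagByteSizes.MaxSize]; // 16-byte authentication tag

    using var aes = new AesGcm(key);
    aes.Encrypt(nonce, plaintext, ciphertext, tag);
    return (nonce, ciphertext, tag);
}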

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

csharpsquid:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the System.Random class relies on a pseudorandom number generator, it should not be used for security-critical applications or for protecting sensitive data. In such contexts, the System.Security.Cryptography.RandomNumberGenerator class, which relies on a cryptographically strong random number generator (RNG), should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Only use random number generators which are recommended by OWASP or any other trusted organization.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

var random = new Random(); // Sensitive use of Random
byte[] data = new byte[16];
random.NextBytes(data);
return BitConverter.ToString(data); // Check if this value is used for hashing or encryption

Compliant Solution

using System.Security.Cryptography;
...
var randomGenerator = RandomNumberGenerator.Create(); // Compliant for security-sensitive use cases
byte[] data = new byte[16];
randomGenerator.GetBytes(data);
return BitConverter.ToString(data);

See

csharpsquid:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it is up to the developer to decide whether the content of the cookie can be read by client-side scripts. As the majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help reduce their impact, as it will not be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / security-sensitive cookies.

Sensitive Code Example

When the HttpCookie.HttpOnly property is set to false, the cookie can be accessed by client-side code:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.HttpOnly = false; // Sensitive: this cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

The default value of the HttpOnly flag is false, unless overwritten by the application’s configuration file:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
// Sensitive: this cookie is created without the httponly flag (by default set to false) and so it can be stolen easily in case of XSS vulnerability

Compliant Solution

Set the HttpCookie.HttpOnly property to true:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.HttpOnly = true; // Compliant: the sensitive cookie is protected against theft thanks to the HttpOnly property set to true

Or change the default flag values for the whole application by editing the Web.config configuration file:

<httpCookies httpOnlyCookies="true" requireSSL="true" />

  • the requireSSL attribute corresponds programmatically to the Secure field.
  • the httpOnlyCookies attribute corresponds programmatically to the HttpOnly field.

See

csharpsquid:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold: data breaches and the exposure of encrypted data can undermine trust in the organization, as customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in .NET

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var RsaCsp = new RSACryptoServiceProvider(); // Noncompliant
}

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var DsaCsp = new DSACryptoServiceProvider(); // Noncompliant
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

using System;
using System.Security.Cryptography;

public void encrypt()
{
    ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.brainpoolP160t1); // Noncompliant
}

Compliant solution

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var RsaCsp = new RSACryptoServiceProvider(2048);
}

using System;
using System.Security.Cryptography;

public void encrypt()
{
    var Dsa = new DSACng(2048);
}

using System;
using System.Security.Cryptography;

public void encrypt()
{
    ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
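
A minimal sketch selecting an AES key size explicitly:

using System.Security.Cryptography;

public void encrypt()
{
    using var aes = Aes.Create();
    aes.KeySize = 256; // 128 bits is the recommended minimum; 256 bits offers a larger margin
}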

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1.

Pitfalls

Setting the KeySize property does not update the key

The following code has no effect on the key size:

var RsaCsp = new RSACryptoServiceProvider();
RsaCsp.KeySize = 2048;

Assigning a value to the KeySize property of a CryptoServiceProvider does not change the size of the key the provider actually uses: the assignment is silently ignored, and neither the compiler nor the runtime reports an error. This must not be considered a workaround.
To change the key size, use one of the overloaded constructors with the desired key size instead.

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

csharpsquid:S5753

ASP.NET 1.1+ comes with a feature called Request Validation, which prevents the server from accepting content containing un-encoded HTML. This feature acts as a first protection layer against Cross-Site Scripting (XSS) attacks and as a simple Web Application Firewall (WAF), rejecting requests potentially containing malicious content.

While this feature is not a silver bullet against all XSS attacks, it helps to catch basic ones. It will, for example, prevent <script type="text/javascript" src="https://malicious.domain/payload.js"> from reaching your controller.

Note: Because the Request Validation feature is only available in ASP.NET, no Security Hotspot is raised for ASP.NET Core applications.

Ask Yourself Whether

  • the developer does not understand the impact of deactivating the Request Validation feature
  • the web application accepts user-supplied data
  • not all user-supplied data is validated

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Activate the Request Validation feature for all HTTP requests

Sensitive Code Example

At Controller level:

[ValidateInput(false)]
public ActionResult Welcome(string name)
{
  ...
}

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="false" />
      ...
      <httpRuntime requestValidationMode="0.0" />
   </system.web>
</configuration>

Compliant Solution

At Controller level:

[ValidateInput(true)]
public ActionResult Welcome(string name)
{
  ...
}

or

public ActionResult Welcome(string name)
{
  ...
}

At application level, configured in the Web.config file:

<configuration>
   <system.web>
      <pages validateRequest="true" />
      ...
      <httpRuntime requestValidationMode="4.5" />
   </system.web>
</configuration>

See

csharpsquid:S5766

The deserialization process extracts data from the serialized representation of an object and reconstructs it directly, without calling constructors. Thus, data validation implemented in constructors can be bypassed if serialized objects are controlled by an attacker.

Ask Yourself Whether

  • The data validation implemented in constructors enforces a relevant security check.
  • Objects instantiated via deserialization don’t run the same security checks as the ones executed when objects are created through constructors.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • At the end of the deserialization process it is recommended to perform the same validation checks as the ones performed in constructors, especially when the serialized object can be controlled by an attacker.

Sensitive Code Example

When a serializable class doesn’t implement the ISerializable or IDeserializationCallback interfaces and has a constructor using its parameters in conditions:

[Serializable]
public class InternalUrl
{
    private string url;

    public InternalUrl(string tmpUrl) // Sensitive
    {
       if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
       {
          url= "http://localhost/default";
       }
       else
       {
          url= tmpUrl;
       }
    }
}

When a class implements the ISerializable interface and has a regular constructor using its parameters in conditions, but doesn’t perform the same validation after deserialization:

[Serializable]
public class InternalUrl : ISerializable
{
    private string url;

    public InternalUrl(string tmpUrl) // Sensitive
    {
        if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
        {
            url= "http://localhost/default";
        }
        else
        {
            url= tmpUrl;
        }
    }

    // special constructor used during deserialization
    protected InternalUrl(SerializationInfo info, StreamingContext context) // Sensitive
    {
       url= (string) info.GetValue("url", typeof(string));
       // the same validation as seen in the regular constructor is not performed
     }

    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("url", url);
    }
}

When a class implements the IDeserializationCallback interface and has a constructor using its parameters in conditions, but the IDeserializationCallback.OnDeserialization method doesn’t perform any conditional checks:

[Serializable]
public class InternalUrl : IDeserializationCallback
{
    private string url;

    public InternalUrl(string tmpUrl) // Sensitive
    {
        if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
        {
            url= "http://localhost/default";
        }
        else
        {
            url= tmpUrl;
        }
    }

    void IDeserializationCallback.OnDeserialization(object sender) // Sensitive
    {
       // the same validation as seen in the constructor is not performed
    }
}

Compliant Solution

When using the ISerializable interface to control deserialization, perform the same checks inside the special constructor (SerializationInfo info, StreamingContext context) used during deserialization as inside the regular constructors:

[Serializable]
public class InternalUrl : ISerializable
{
    private string url;

    public InternalUrl(string tmpUrl)
    {
        if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
        {
            url= "http://localhost/default";
        }
        else
        {
            url= tmpUrl;
        }
    }

    // special constructor used during deserialization
    protected InternalUrl(SerializationInfo info, StreamingContext context)
    {
       string tmpUrl= (string) info.GetValue("url", typeof(string));

       if(!tmpUrl.StartsWith("http://localhost/")) { // Compliant
          url= "http://localhost/default";
       }
       else {
          url= tmpUrl;
       }
     }

    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("url", url);
    }
}

When using the IDeserializationCallback interface to control deserialization, perform the same checks inside the IDeserializationCallback.OnDeserialization method, which runs after deserialization, as inside the regular constructors:

[Serializable]
public class InternalUrl : IDeserializationCallback
{
    private string url;

    public InternalUrl(string tmpUrl)
    {
       if(!tmpUrl.StartsWith("http://localhost/")) // there is some input validation
       {
          url= "http://localhost/default";
       }
       else
       {
          url= tmpUrl;
       }
    }

    void IDeserializationCallback.OnDeserialization(object sender) // Compliant
    {
        if(!url.StartsWith("http://localhost/"))
        {
            url= "http://localhost/default";
        }
    }
}

See

csharpsquid:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms like AES, RSA, SHA, etc. should be used instead.

This rule tracks custom implementations of these types from the System.Security.Cryptography namespace:

  • AsymmetricAlgorithm
  • AsymmetricKeyExchangeDeformatter
  • AsymmetricKeyExchangeFormatter
  • AsymmetricSignatureDeformatter
  • AsymmetricSignatureFormatter
  • DeriveBytes
  • HashAlgorithm
  • ICryptoTransform
  • SymmetricAlgorithm

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

public class CustomHash : HashAlgorithm // Noncompliant
{
    private byte[] result;

    public override void Initialize() => result = null;
    protected override byte[] HashFinal() => result;

    protected override void HashCore(byte[] array, int ibStart, int cbSize) =>
        result ??= array.Take(8).ToArray();
}

Compliant Solution

SHA256 mySHA256 = SHA256.Create();
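
For instance, a minimal usage sketch of the standard implementation:

using System.Security.Cryptography;
using System.Text;

byte[] data = Encoding.UTF8.GetBytes("message");
using SHA256 mySHA256 = SHA256.Create();
byte[] hash = mySHA256.ComputeHash(data);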

See

csharpsquid:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: SASL and Simple. The Simple Authentication method further breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

DirectoryEntry myDirectoryEntry = new DirectoryEntry(adPath);
myDirectoryEntry.AuthenticationType = AuthenticationTypes.None; // Noncompliant

DirectoryEntry myDirectoryEntry = new DirectoryEntry(adPath, "u", "p", AuthenticationTypes.None); // Noncompliant

Compliant solution

DirectoryEntry myDirectoryEntry = new DirectoryEntry(myADSPath); // Compliant; default DirectoryEntry.AuthenticationType property value is "Secure" since .NET Framework 2.0

DirectoryEntry myDirectoryEntry = new DirectoryEntry(myADSPath, "u", "p", AuthenticationTypes.Secure);

Resources

Documentation

Standards

csharpsquid:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm an e-mail address when registering on a website, to reset a password, etc.).
  • Message integrity computation.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that are deliberately slow to compute, such as bcrypt, scrypt, Argon2, or PBKDF2, because they slow down brute-force attacks.
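
For password hashing, here is a minimal PBKDF2 sketch (the iteration count is illustrative and should be tuned to your hardware):

using System.Security.Cryptography;

public static byte[] HashPassword(string password, byte[] salt)
{
    using var kdf = new Rfc2898DeriveBytes(password, salt, 100_000, HashAlgorithmName.SHA256);
    return kdf.GetBytes(32); // 256-bit derived key
}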

Sensitive Code Example

var hashProvider1 = new MD5CryptoServiceProvider(); // Sensitive
var hashProvider2 = (HashAlgorithm)CryptoConfig.CreateFromName("MD5"); // Sensitive
var hashProvider3 = new SHA1Managed(); // Sensitive
var hashProvider4 = HashAlgorithm.Create("SHA1"); // Sensitive

Compliant Solution

var hashProvider1 = new SHA512Managed(); // Compliant
var hashProvider2 = (HashAlgorithm)CryptoConfig.CreateFromName("SHA512Managed"); // Compliant
var hashProvider3 = HashAlgorithm.Create("SHA512Managed"); // Compliant

See

csharpsquid:S4792

This rule is deprecated, and will eventually be removed.

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and on how it is logged.

This rule flags for review code that initiates loggers configuration. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The logging level (info, warn, error) might filter out important information, and the logs might not include contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format that is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information that is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them to the logs. This includes checking their size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Enabling users to write whatever they want in your logs can have many impacts: it could, for example, use up all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

.Net Core: configure programmatically

using System;
using System.Collections;
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using Microsoft.AspNetCore;

namespace MvcApp
{
    public class ProgramLogging
    {
        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureLogging((hostingContext, logging) => // Sensitive
                {
                    // ...
                })
                .UseStartup<StartupLogging>();
    }

    public class StartupLogging
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddLogging(logging => // Sensitive
            {
                // ...
            });
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            IConfiguration config = null;
            LogLevel level = LogLevel.Critical;
            Boolean includeScopes = false;
            Func<string,Microsoft.Extensions.Logging.LogLevel,bool> filter = null;
            Microsoft.Extensions.Logging.Console.IConsoleLoggerSettings consoleSettings = null;
            Microsoft.Extensions.Logging.AzureAppServices.AzureAppServicesDiagnosticsSettings azureSettings = null;
            Microsoft.Extensions.Logging.EventLog.EventLogSettings eventLogSettings = null;

            // An issue will be raised for each call to an ILoggerFactory extension method that adds a logger.
            loggerFactory.AddAzureWebAppDiagnostics(); // Sensitive
            loggerFactory.AddAzureWebAppDiagnostics(azureSettings); // Sensitive
            loggerFactory.AddConsole(); // Sensitive
            loggerFactory.AddConsole(level); // Sensitive
            loggerFactory.AddConsole(level, includeScopes); // Sensitive
            loggerFactory.AddConsole(filter); // Sensitive
            loggerFactory.AddConsole(filter, includeScopes); // Sensitive
            loggerFactory.AddConsole(config); // Sensitive
            loggerFactory.AddConsole(consoleSettings); // Sensitive
            loggerFactory.AddDebug(); // Sensitive
            loggerFactory.AddDebug(level); // Sensitive
            loggerFactory.AddDebug(filter); // Sensitive
            loggerFactory.AddEventLog(); // Sensitive
            loggerFactory.AddEventLog(eventLogSettings); // Sensitive
            loggerFactory.AddEventLog(level); // Sensitive
            loggerFactory.AddEventSourceLogger(); // Sensitive

            IEnumerable<ILoggerProvider> providers = null;
            LoggerFilterOptions filterOptions1 = null;
            IOptionsMonitor<LoggerFilterOptions> filterOptions2 = null;

            LoggerFactory factory = new LoggerFactory(); // Sensitive
            new LoggerFactory(providers); // Sensitive
            new LoggerFactory(providers, filterOptions1); // Sensitive
            new LoggerFactory(providers, filterOptions2); // Sensitive
        }
    }
}

Log4Net

using System;
using System.IO;
using System.Xml;
using log4net.Appender;
using log4net.Config;
using log4net.Repository;

namespace Logging
{
    class Log4netLogging
    {
        void Foo(ILoggerRepository repository, XmlElement element, FileInfo configFile, Uri configUri, Stream configStream,
        IAppender appender, params IAppender[] appenders) {
            log4net.Config.XmlConfigurator.Configure(repository); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, element); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configFile); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configUri); // Sensitive
            log4net.Config.XmlConfigurator.Configure(repository, configStream); // Sensitive
            log4net.Config.XmlConfigurator.ConfigureAndWatch(repository, configFile); // Sensitive

            log4net.Config.DOMConfigurator.Configure(); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository); // Sensitive
            log4net.Config.DOMConfigurator.Configure(element); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, element); // Sensitive
            log4net.Config.DOMConfigurator.Configure(configFile); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configFile); // Sensitive
            log4net.Config.DOMConfigurator.Configure(configStream); // Sensitive
            log4net.Config.DOMConfigurator.Configure(repository, configStream); // Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(configFile); // Sensitive
            log4net.Config.DOMConfigurator.ConfigureAndWatch(repository, configFile); // Sensitive

            log4net.Config.BasicConfigurator.Configure(); // Sensitive
            log4net.Config.BasicConfigurator.Configure(appender); // Sensitive
            log4net.Config.BasicConfigurator.Configure(appenders); // Sensitive
            log4net.Config.BasicConfigurator.Configure(repository); // Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appender); // Sensitive
            log4net.Config.BasicConfigurator.Configure(repository, appenders); // Sensitive
        }
    }
}

NLog: configure programmatically

namespace Logging
{
    class NLogLogging
    {
        void Foo(NLog.Config.LoggingConfiguration config) {
            NLog.LogManager.Configuration = config; // Sensitive, this changes the logging configuration.
        }
    }
}

Serilog

namespace Logging
{
    class SerilogLogging
    {
        void Foo() {
            new Serilog.LoggerConfiguration(); // Sensitive
        }
    }
}

See

csharpsquid:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in .NET

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

using System.Xml;

public static void decode()
{
    XmlDocument parser = new XmlDocument();
    parser.XmlResolver = new XmlUrlResolver(); // Noncompliant
    parser.LoadXml("xxe.xml");
}

Compliant solution

XmlDocument is safe by default since .NET Framework 4.5.2. For older versions, set XmlResolver explicitly to null.

using System.Xml;

public static void decode()
{
    XmlDocument parser = new XmlDocument();
    parser.XmlResolver = null;
    parser.LoadXml("xxe.xml");
}

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
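
For the stream-based XmlReader API, external entity processing can likewise be disabled through XmlReaderSettings; a minimal sketch:

using System.Xml;

public static void parse()
{
    XmlReaderSettings settings = new XmlReaderSettings
    {
        DtdProcessing = DtdProcessing.Prohibit, // reject DTDs, and with them external entities
        XmlResolver = null
    };
    using XmlReader reader = XmlReader.Create("xxe.xml", settings);
    while (reader.Read()) { }
}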

Resources

Standards

csharpsquid:S2612

In Unix, the "others" class refers to all users except the owner of the file and the members of the group assigned to this file.

In Windows, the "Everyone" group is similar: it includes all members of the Authenticated Users group, as well as the built-in Guest account and several other built-in security accounts.

Granting permissions to these groups can lead to unintended access to files.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

.Net Framework:

var unsafeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Allow);

var fileSecurity = File.GetAccessControl("path");
fileSecurity.AddAccessRule(unsafeAccessRule); // Sensitive
fileSecurity.SetAccessRule(unsafeAccessRule); // Sensitive
File.SetAccessControl("fileName", fileSecurity);

.Net / .Net Core

var fileInfo = new FileInfo("path");
var fileSecurity = fileInfo.GetAccessControl();

fileSecurity.AddAccessRule(new FileSystemAccessRule("Everyone", FileSystemRights.Write, AccessControlType.Allow)); // Sensitive
fileInfo.SetAccessControl(fileSecurity);

.Net / .Net Core using Mono.Posix.NETStandard

var fileSystemEntry = UnixFileSystemInfo.GetFileSystemEntry("path");
fileSystemEntry.FileAccessPermissions = FileAccessPermissions.OtherReadWriteExecute; // Sensitive

Compliant Solution

.Net Framework

var safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny);

var fileSecurity = File.GetAccessControl("path");
fileSecurity.AddAccessRule(safeAccessRule);
File.SetAccessControl("path", fileSecurity);

.Net / .Net Core

var safeAccessRule = new FileSystemAccessRule("Everyone", FileSystemRights.FullControl, AccessControlType.Deny);

var fileInfo = new FileInfo("path");
var fileSecurity = fileInfo.GetAccessControl();
fileSecurity.SetAccessRule(safeAccessRule);
fileInfo.SetAccessControl(fileSecurity);

.Net / .Net Core using Mono.Posix.NETStandard

var fs = UnixFileSystemInfo.GetFileSystemEntry("path");
fs.FileAccessPermissions = FileAccessPermissions.UserExecute;

See

csharpsquid:S3884

This rule is deprecated, and will eventually be removed.

Why is this an issue?

CoSetProxyBlanket and CoInitializeSecurity both work to set the permissions context in which the process invoked immediately after is executed. Calling them from within that process is useless because it’s too late at that point; the permissions context has already been set.

Specifically, these methods are meant to be called from non-managed code such as a C++ wrapper that then invokes the managed, i.e. C# or VB.NET, code.

Noncompliant code example

[DllImport("ole32.dll")]
static extern int CoSetProxyBlanket([MarshalAs(UnmanagedType.IUnknown)]object pProxy, uint dwAuthnSvc, uint dwAuthzSvc,
	[MarshalAs(UnmanagedType.LPWStr)] string pServerPrincName, uint dwAuthnLevel, uint dwImpLevel, IntPtr pAuthInfo,
	uint dwCapabilities);

public enum RpcAuthnLevel
{
	Default = 0,
	None = 1,
	Connect = 2,
	Call = 3,
	Pkt = 4,
	PktIntegrity = 5,
	PktPrivacy = 6
}

public enum RpcImpLevel
{
	Default = 0,
	Anonymous = 1,
	Identify = 2,
	Impersonate = 3,
	Delegate = 4
}

public enum EoAuthnCap
{
	None = 0x00,
	MutualAuth = 0x01,
	StaticCloaking = 0x20,
	DynamicCloaking = 0x40,
	AnyAuthority = 0x80,
	MakeFullSIC = 0x100,
	Default = 0x800,
	SecureRefs = 0x02,
	AccessControl = 0x04,
	AppID = 0x08,
	Dynamic = 0x10,
	RequireFullSIC = 0x200,
	AutoImpersonate = 0x400,
	NoCustomMarshal = 0x2000,
	DisableAAA = 0x1000
}

[DllImport("ole32.dll")]
public static extern int CoInitializeSecurity(IntPtr pVoid, int cAuthSvc, IntPtr asAuthSvc, IntPtr pReserved1,
	RpcAuthnLevel level, RpcImpLevel impers, IntPtr pAuthList, EoAuthnCap dwCapabilities, IntPtr pReserved3);

static void Main(string[] args)
{
	var hres1 = CoSetProxyBlanket(null, 0, 0, null, 0, 0, IntPtr.Zero, 0); // Noncompliant

	var hres2 = CoInitializeSecurity(IntPtr.Zero, -1, IntPtr.Zero, IntPtr.Zero, RpcAuthnLevel.None,
		RpcImpLevel.Impersonate, IntPtr.Zero, EoAuthnCap.None, IntPtr.Zero); // Noncompliant
}

Resources

csharpsquid:S1313

Hardcoding IP addresses is security-sensitive. It has led in the past to the following vulnerabilities:

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but in the case of a hardcoded IP address, solving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It is a personal IP address (assigned to an identifiable person).

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows changing the destination quickly without rebuilding the software.

Sensitive Code Example

var ip = "192.168.12.42";
var address = IPAddress.Parse(ip);

Compliant Solution

var ip = ConfigurationManager.AppSettings["myapplication.ip"];
var address = IPAddress.Parse(ip);
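
The recommendation above also mentions environment variables as an alternative to configuration files. A minimal sketch of that approach; the variable name MYAPPLICATION_IP is an assumption for illustration:

var ip = Environment.GetEnvironmentVariable("MYAPPLICATION_IP") // hypothetical variable name
         ?? throw new InvalidOperationException("MYAPPLICATION_IP is not set");
var address = IPAddress.Parse(ip);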

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

csharpsquid:S4830

This vulnerability makes it possible for encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in .NET

Code examples

In the following example, the callback change impacts the entirety of HTTP requests made by the application.

Certificate validation is disabled by overriding ServerCertificateValidationCallback with a callback that unconditionally accepts any certificate. It is highly recommended to keep the default validation.

Noncompliant code example

using System.Net;
using System.Net.Http;

public static void connect()
{
    ServicePointManager.ServerCertificateValidationCallback +=
        (sender, certificate, chain, errors) => {
            return true; // Noncompliant
        };

    HttpClient httpClient = new HttpClient();
    HttpResponseMessage response = httpClient.GetAsync("https://example.com").Result;
}
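
Compliant solution

A minimal sketch of the fix: simply do not override ServerCertificateValidationCallback, so the default certificate validation stays in effect.

using System.Net.Http;

public static void connect()
{
    // No callback override: the default TLS certificate validation applies.
    HttpClient httpClient = new HttpClient();
    HttpResponseMessage response = httpClient.GetAsync("https://example.com").Result;
}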

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
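
When adding the certificate to the operating system trust store is not an option, trust can instead be scoped to a custom root in code. A minimal sketch, assuming .NET 5 or later; the helper name CreateClientTrustingCustomCa and the way the CA certificate is obtained are illustrative assumptions:

using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

public static HttpClient CreateClientTrustingCustomCa(X509Certificate2 customCa)
{
    var handler = new SocketsHttpHandler();
    handler.SslOptions.RemoteCertificateValidationCallback = (sender, cert, chain, errors) =>
    {
        if (cert is null || chain is null)
            return false;

        // Re-run chain building against a custom root trust that contains only
        // our CA, instead of blindly returning true.
        chain.ChainPolicy.TrustMode = X509ChainTrustMode.CustomRootTrust;
        chain.ChainPolicy.CustomTrustStore.Clear();
        chain.ChainPolicy.CustomTrustStore.Add(customCa);
        return chain.Build(new X509Certificate2(cert));
    };
    return new HttpClient(handler);
}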

Resources

Standards

csharpsquid:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g., a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if it exceeds a predefined threshold; in particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

using var zipToOpen = new FileStream(@"ZipBomb.zip", FileMode.Open);
using var archive = new ZipArchive(zipToOpen, ZipArchiveMode.Read);
foreach (ZipArchiveEntry entry in archive.Entries)
{
  entry.ExtractToFile("./output_onlyfortesting.txt", true); // Sensitive
}

Compliant Solution

int THRESHOLD_ENTRIES = 10000;
int THRESHOLD_SIZE = 1000000000; // 1 GB
double THRESHOLD_RATIO = 10;
int totalSizeArchive = 0;
int totalEntryArchive = 0;

using var zipToOpen = new FileStream(@"ZipBomb.zip", FileMode.Open);
using var archive = new ZipArchive(zipToOpen, ZipArchiveMode.Read);
foreach (ZipArchiveEntry entry in archive.Entries)
{
  totalEntryArchive++;

  using (Stream st = entry.Open())
  {
    byte[] buffer = new byte[1024];
    int totalSizeEntry = 0;
    int numBytesRead = 0;

    do
    {
      numBytesRead = st.Read(buffer, 0, 1024);
      totalSizeEntry += numBytesRead;
      totalSizeArchive += numBytesRead;
      double compressionRatio = (double)totalSizeEntry / entry.CompressedLength; // cast avoids integer division truncating the ratio

      if(compressionRatio > THRESHOLD_RATIO) {
        // ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
        break;
      }
    }
    while (numBytesRead > 0);
  }

  if(totalSizeArchive > THRESHOLD_SIZE) {
      // the uncompressed data size is too much for the application resource capacity
      break;
  }

  if(totalEntryArchive > THRESHOLD_ENTRIES) {
      // too many entries in this archive; can lead to inode exhaustion on the system
      break;
  }
}

See

csharpsquid:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, hard-coded credentials have led to real-world vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

string username = "admin";
string password = "Admin123"; // Sensitive
string usernamePassword  = "user=admin&password=Admin123"; // Sensitive
string url = "scheme://user:Admin123@domain.com"; // Sensitive

Compliant Solution

string username = "admin";
string password = GetEncryptedPassword();
string usernamePassword = string.Format("user={0}&password={1}", GetEncryptedUsername(), GetEncryptedPassword());
string url = $"scheme://{username}:{password}@domain.com";

string url2 = "http://guest:guest@domain.com"; // Compliant
const string Password_Property = "custom.password"; // Compliant
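
As an illustration of the first recommended practice above, a minimal sketch that reads the credential from the process environment instead of the source code (the variable name DB_PASSWORD is an assumption):

string password = Environment.GetEnvironmentVariable("DB_PASSWORD") // hypothetical variable name
                  ?? throw new InvalidOperationException("DB_PASSWORD is not set");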

Exceptions

  • Issue is not raised when URI username and password are the same.
  • Issue is not raised when searched pattern is found in variable name and value.

See

csharpsquid:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease attackers' chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, the use of clear-text protocols has led to real-world vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

var urlHttp = "http://example.com";                 // Noncompliant
var urlFtp = "ftp://anonymous@example.com";         // Noncompliant
var urlTelnet = "telnet://anonymous@example.com";   // Noncompliant
using var smtp = new SmtpClient("host", 25); // Noncompliant, EnableSsl is not set
using var telnet = new MyTelnet.Client("host", port); // Noncompliant, rule raises Security Hotspot on any member containing "Telnet"

Compliant Solution

var urlHttps = "https://example.com";
var urlSftp = "sftp://anonymous@example.com";
var urlSsh = "ssh://anonymous@example.com";
using var smtp = new SmtpClient("host", 25) { EnableSsl = true };
using var ssh = new MySsh.Client("host", port);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

csharpsquid:S5693

Rejecting requests with an excessive content length is a good practice: it controls network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or less for file uploads.
    • 2 MB or less for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

using Microsoft.AspNetCore.Mvc;

public class MyController : Controller
{
    [HttpPost]
    [DisableRequestSizeLimit] // Sensitive: no size limit
    [RequestSizeLimit(10485760)] // Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB
    public IActionResult PostRequest(Model model)
    {
    // ...
    }

    [HttpPost]
    [RequestFormLimits(MultipartBodyLengthLimit = 10485760)] // Sensitive: 10485760 B = 10240 KB = 10 MB is more than the recommended limit of 8MB
    public IActionResult MultipartFormRequest(Model model)
    {
    // ...
    }
}

In Web.config:

<configuration>
    <system.web>
        <httpRuntime maxRequestLength="81920" executionTimeout="3600" />
        <!-- Sensitive: maxRequestLength is expressed in KB, so 81920 KB = 80 MB  -->
    </system.web>
    <system.webServer>
        <security>
            <requestFiltering>
                <requestLimits maxAllowedContentLength="83886080" />
                <!-- Sensitive: maxAllowedContentLength is expressed in bytes, so 83886080 B = 81920 KB = 80 MB  -->
            </requestFiltering>
        </security>
    </system.webServer>
</configuration>

Compliant Solution

using Microsoft.AspNetCore.Mvc;

public class MyController : Controller
{
    [HttpPost]
    [RequestSizeLimit(8388608)] // Compliant: 8388608 B = 8192 KB = 8 MB
    public IActionResult PostRequest(Model model)
    {
    // ...
    }

    [HttpPost]
    [RequestFormLimits(MultipartBodyLengthLimit = 8388608)] // Compliant: 8388608 B = 8192 KB = 8 MB
    public IActionResult MultipartFormRequest(Model model)
    {
    // ...
    }
}

In Web.config:

<configuration>
    <system.web>
        <httpRuntime maxRequestLength="8192" executionTimeout="3600" />
        <!-- Compliant: maxRequestLength is expressed in KB, so 8192 KB = 8 MB  -->
    </system.web>
    <system.webServer>
        <security>
            <requestFiltering>
                <requestLimits maxAllowedContentLength="8388608" />
                <!-- Compliant: maxAllowedContentLength is expressed in bytes, so 8388608 B = 8192 KB = 8 MB  -->
            </requestFiltering>
        </security>
    </system.webServer>
</configuration>
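
For ASP.NET Core applications hosted on Kestrel, the same ceiling can also be enforced globally instead of per action; the attributes shown above still override it per endpoint. A minimal sketch, assuming the .NET 6+ minimal hosting model:

var builder = WebApplication.CreateBuilder(args);

// Cap request bodies at 8 MB for every endpoint unless overridden per action.
builder.WebHost.ConfigureKestrel(options =>
    options.Limits.MaxRequestBodySize = 8_388_608); // 8388608 B = 8192 KB = 8 MB

var app = builder.Build();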

See

csharpsquid:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into them increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Use parameterized queries, prepared statements, or stored procedures, and bind untrusted values to query parameters instead of concatenating or formatting them into the query string.

Sensitive Code Example

public void Foo(DbContext context, string query, string param)
{
    string sensitiveQuery = string.Concat(query, param);
    context.Database.ExecuteSqlCommand(sensitiveQuery); // Sensitive
    context.Query<User>().FromSql(sensitiveQuery); // Sensitive

    context.Database.ExecuteSqlCommand($"SELECT * FROM mytable WHERE mycol={param}", param); // Sensitive: the FormattableString is evaluated and converted to RawSqlString
    string formattedQuery = $"SELECT * FROM mytable WHERE mycol={param}";
    context.Database.ExecuteSqlCommand(formattedQuery); // Sensitive: the FormattableString has already been evaluated, so it won't be converted to a parametrized query.
}

public void Bar(SqlConnection connection, string param)
{
    SqlCommand command;
    string sensitiveQuery = string.Format("INSERT INTO Users (name) VALUES (\"{0}\")", param);
    command = new SqlCommand(sensitiveQuery); // Sensitive

    command.CommandText = sensitiveQuery; // Sensitive

    SqlDataAdapter adapter;
    adapter = new SqlDataAdapter(sensitiveQuery, connection); // Sensitive
}

Compliant Solution

public void Foo(DbContext context, string query, string param)
{
    context.Database.ExecuteSqlCommand("SELECT * FROM mytable WHERE mycol=@p0", param); // Compliant, it's a parametrized safe query
}
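
For the ADO.NET case from Bar(), the equivalent fix is to bind the untrusted value to a named parameter. A minimal sketch:

public void Bar(SqlConnection connection, string param)
{
    using var command = new SqlCommand("INSERT INTO Users (name) VALUES (@name)", connection);
    command.Parameters.AddWithValue("@name", param); // the value is bound, never concatenated
    command.ExecuteNonQuery();
}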

See

csharpsquid:S6640

Using unsafe code blocks can lead to unintended security or stability risks.

unsafe code blocks allow developers to use features such as pointers, fixed buffers, function calls through pointers and manual memory management. Such features may be necessary for interoperability with native libraries, as these often require pointers. It may also increase performance in some critical areas, as certain bounds checks are not executed in an unsafe context.

unsafe code blocks aren’t necessarily dangerous; however, the contents of such blocks are not verified by the Common Language Runtime. Therefore, it is up to the programmer to ensure that no bugs are introduced through manual memory management or casting. If not done correctly, those bugs can lead to memory corruption vulnerabilities such as stack overflows. unsafe code blocks should be used with caution because of these security and stability risks.

Ask Yourself Whether

  • Any pointers or fixed buffers are declared within the unsafe code block.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

Unless absolutely necessary, do not use unsafe code blocks. If unsafe is used to increase performance, then the Span and Memory APIs may serve a similar purpose in a safe context.

If it is not possible to remove the code block, then it should be kept as short as possible. Doing so reduces risk, as there is less code that can potentially introduce new bugs. Within the unsafe code block, make sure that:

  • All type casts are correct.
  • Memory is correctly allocated and then released.
  • Array accesses can never go out of bounds.

Sensitive Code Example

public unsafe int SubarraySum(int[] array, int start, int end)  // Sensitive
{
    var sum = 0;

    // Skip array bound checks for extra performance
    fixed (int* firstNumber = array)
    {
        for (int i = start; i < end; i++)
            sum += *(firstNumber + i);
    }

    return sum;
}

Compliant Solution

public int SubarraySum(int[] array, int start, int end)
{
    var sum = 0;

    Span<int> span = array.AsSpan();
    for (int i = start; i < end; i++)
        sum += span[i];

    return sum;
}

See

csharpsquid:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability that two users end up with the same password and salt pair might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:

  • Argon2
  • PBKDF2
  • Scrypt
  • Bcrypt

When they are used for password storage, using a secure, random salt is required.

However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.

How to fix it in .NET

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

using System.Security.Cryptography;
using System.Text;

public static void hash(string password)
{
    var salt = Encoding.UTF8.GetBytes("salty");
    var hashed = new Rfc2898DeriveBytes(password, salt); // Noncompliant
}

Compliant solution

using System.Security.Cryptography;

public static void hash(string password)
{
    var hashed = new Rfc2898DeriveBytes(password, 16);
}

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level: at least 16 bytes (128 bits), matching the compliant example above.

In the case of the code sample, the class automatically takes care of generating a secure salt if none is specified.
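
When the salt must be stored alongside the hash, it can also be generated explicitly. A minimal sketch, assuming .NET 6 or later; the iteration count and output length are illustrative choices:

using System.Security.Cryptography;

public static byte[] HashWithExplicitSalt(string password, out byte[] salt)
{
    salt = RandomNumberGenerator.GetBytes(16); // cryptographically secure 128-bit salt
    using var kdf = new Rfc2898DeriveBytes(password, salt, 100_000, HashAlgorithmName.SHA256);
    return kdf.GetBytes(32);
}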

Resources

Standards

csharpsquid:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas like /tmp in Linux-based systems. An application manipulating files from these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted, or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, this has led to real-world vulnerabilities.

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP, TMPDIR, and TEMP.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP
  • %USERPROFILE%\AppData\Local\Temp

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Out of the box, .NET is missing secure-by-design APIs to create temporary files. To overcome this, one of the following options can be used:

  • Use a dedicated sub-folder with tightly controlled permissions
  • Create temporary files in a publicly writable folder and make sure:
    • Generated filename is unpredictable
    • File is readable and writable only by the creating user ID
    • File descriptor is not inherited by child processes
    • File is destroyed as soon as it is closed

Sensitive Code Example

using var writer = new StreamWriter("/tmp/f"); // Sensitive
var tmp = Environment.GetEnvironmentVariable("TMP"); // Sensitive

Compliant Solution

var randomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

// Creates a new file with write-only, non-inheritable permissions; the file is deleted on close.
using var fileStream = new FileStream(randomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose);
using var writer = new StreamWriter(fileStream);
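
The first recommended option, a dedicated sub-folder with tightly controlled permissions, can be sketched as follows. This assumes .NET 7 or later (for UnixFileMode) on a POSIX file system:

var privateDir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

// 0700: readable, writable, and traversable only by the owning user.
Directory.CreateDirectory(privateDir,
    UnixFileMode.UserRead | UnixFileMode.UserWrite | UnixFileMode.UserExecute);

using var writer2 = new StreamWriter(Path.Combine(privateDir, "data.tmp"));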

See

csharpsquid:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of the trust boundary and use their content without additional security validation or sanitization. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

using System.IO;

public void Example()
{
    var tempPath = Path.GetTempFileName();  // Noncompliant

    using (var writer = new StreamWriter(tempPath))
    {
        writer.WriteLine("content");
    }
}

Compliant solution

using System.IO;

public void Example()
{
    var randomPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

    using (var fileStream = new FileStream(randomPath, FileMode.CreateNew, FileAccess.Write, FileShare.None, 4096, FileOptions.DeleteOnClose))
    using (var writer = new StreamWriter(fileStream))
    {
        writer.WriteLine("content");
    }
}

How does this work?

Applications should create temporary files so that no third party can read or modify their content. This requires that the files' name, location, and permissions be carefully chosen and set.

Strong security controls

Temporary files can be created using unsafe functions and API as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating the files:

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.

Here, the example compliant code uses the Path.GetTempPath and Path.GetRandomFileName functions to generate a unique random file name. The file is then opened with the FileMode.CreateNew option, which ensures the creation fails if the file already exists. The FileShare.None option additionally prevents the file from being opened again by any process. Finally, the FileOptions.DeleteOnClose option ensures the file is destroyed once the application has finished using it.

Resources

Documentation

  • OWASP - Insecure Temporary File

Standards

csharpsquid:S6444

Not specifying a timeout for regular expressions can lead to a Denial-of-Service attack. Pass a timeout when using System.Text.RegularExpressions to process untrusted input, because a malicious user might craft a value for which the evaluation takes excessively long.

Ask Yourself Whether

  • the input passed to the regular expression is untrusted.
  • the regular expression contains patterns vulnerable to catastrophic backtracking.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to specify a matchTimeout when executing a regular expression.
  • Make sure regular expressions are not vulnerable to Denial-of-Service attacks by reviewing the patterns.
  • Consider using a non-backtracking algorithm by specifying RegexOptions.NonBacktracking.

Sensitive Code Example

public void RegexPattern(string input)
{
    var emailPattern = new Regex(".+@.+", RegexOptions.None); // Sensitive: no timeout
    var isNumber = Regex.IsMatch(input, "[0-9]+"); // Sensitive: no timeout
    var isLetterA = Regex.IsMatch(input, "(a+)+"); // Sensitive: no timeout, backtracking-prone pattern
}

Compliant Solution

public void RegexPattern(string input)
{
    var emailPattern = new Regex(".+@.+", RegexOptions.None, TimeSpan.FromMilliseconds(100));
    var isNumber = Regex.IsMatch(input, "[0-9]+", RegexOptions.None, TimeSpan.FromMilliseconds(100));
    var isLetterA = Regex.IsMatch(input, "(a+)+", RegexOptions.NonBacktracking); // .Net 7 and above
    AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(100)); // process-wide setting
}

See

csharpsquid:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application's PATH environment variable are searched for it. That search can leave an opening for an attacker if one of the directories in PATH is under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

Process p = new Process();
p.StartInfo.FileName = "binary"; // Sensitive

Compliant Solution

Process p = new Process();
p.StartInfo.FileName = @"C:\Apps\binary.exe"; // Compliant
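
When the install location is not fixed at compile time, the full path can still be derived from a trusted base directory rather than searched through PATH. A minimal sketch, assuming the tool ships alongside the application:

Process p = new Process();
// Resolve against the application's own directory instead of searching PATH.
p.StartInfo.FileName = Path.Combine(AppContext.BaseDirectory, "binary.exe");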

See

csharpsquid:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive and has led to real-world vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, which act as directives for the browser and relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

ASP.NET Core MVC:

[HttpGet]
public string Get()
{
    Response.Headers.Add("Access-Control-Allow-Origin", "*"); // Sensitive
    Response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive
    return "content";
}
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddDefaultPolicy(builder =>
        {
            builder.WithOrigins("*"); // Sensitive
        });

        options.AddPolicy(name: "EnableAllPolicy", builder =>
        {
            builder.WithOrigins("*"); // Sensitive
        });

        options.AddPolicy(name: "OtherPolicy", builder =>
        {
            builder.AllowAnyOrigin(); // Sensitive
        });
    });

    services.AddControllers();
}

ASP.NET MVC:

public class HomeController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = HttpContext.Current.Response;

        response.Headers.Add("Access-Control-Allow-Origin", "*"); // Sensitive
        response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive
        response.AppendHeader(HeaderNames.AccessControlAllowOrigin, "*"); // Sensitive

        return new HttpResponseMessage();
    }
}
[EnableCors(origins: "*", headers: "*", methods: "GET")] // Sensitive
public HttpResponseMessage Get() => new HttpResponseMessage()
{
    Content = new StringContent("content")
};

User-controlled origin:

String origin = Request.Headers["Origin"];
Response.Headers.Add("Access-Control-Allow-Origin", origin); // Sensitive

Compliant Solution

ASP.NET Core MVC:

[HttpGet]
public string Get()
{
    Response.Headers.Add("Access-Control-Allow-Origin", "https://trustedwebsite.com"); // Safe
    Response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com"); // Safe
    return "content";
}
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddDefaultPolicy(builder =>
        {
            builder.WithOrigins("https://trustedwebsite.com", "https://anothertrustedwebsite.com"); // Safe
        });

        options.AddPolicy(name: "EnableAllPolicy", builder =>
        {
            builder.WithOrigins("https://trustedwebsite.com"); // Safe
        });
    });

    services.AddControllers();
}

ASP.NET MVC:

public class HomeController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = HttpContext.Current.Response;

        response.Headers.Add("Access-Control-Allow-Origin", "https://trustedwebsite.com");
        response.Headers.Add(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com");
        response.AppendHeader(HeaderNames.AccessControlAllowOrigin, "https://trustedwebsite.com");

        return new HttpResponseMessage();
    }
}
[EnableCors(origins: "https://trustedwebsite.com", headers: "*", methods: "GET")]
public HttpResponseMessage Get() => new HttpResponseMessage()
{
    Content = new StringContent("content")
};

User-controlled origin validated with an allow-list:

String origin = Request.Headers["Origin"];

if (trustedOrigins.Contains(origin))
{
    Response.Headers.Add("Access-Control-Allow-Origin", origin);
}

See

csharpsquid:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • it is unclear whether the website serves mixed content (i.e., whether HTTPS is used everywhere).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

When the HttpCookie.Secure property is set to false, the cookie will be sent over an unencrypted HTTP request:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.Secure = false; //  Sensitive: a security-sensitive cookie is created with the secure flag set to false

The default value of the Secure flag is false, unless overridden by the application’s configuration file:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
//  Sensitive: a security-sensitive cookie is created with the secure flag not defined (by default set to false)

Compliant Solution

Set the HttpCookie.Secure property to true:

HttpCookie myCookie = new HttpCookie("Sensitive cookie");
myCookie.Secure = true; // Compliant

Or change the default flag values for the whole application by editing the Web.config configuration file:

<httpCookies httpOnlyCookies="true" requireSSL="true" />

  • the requireSSL attribute corresponds programmatically to the Secure field.
  • the httpOnlyCookies attribute corresponds programmatically to the HttpOnly field.
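
ASP.NET Core applications do not use HttpCookie; there, the equivalent is setting CookieOptions.Secure when appending the cookie. A minimal sketch, where sessionId is an assumed variable:

Response.Cookies.Append("session", sessionId, new CookieOptions
{
    Secure = true,   // only sent over HTTPS
    HttpOnly = true  // not readable from client-side script
});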

See

xml:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, hard-coded credentials have led to real-world vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

Spring-social-twitter secrets can be stored inside an XML file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="connectionFactoryLocator" class="org.springframework.social.connect.support.ConnectionFactoryRegistry">
      <property name="connectionFactories">
          <list>
              <bean class="org.springframework.social.twitter.connect.TwitterConnectionFactory">
                  <constructor-arg value="username" />
                  <constructor-arg value="very-secret-password" />   <!-- Sensitive -->
              </bean>
          </list>
      </property>
  </bean>
</beans>

Compliant Solution

In Spring Social Twitter, retrieve secrets from environment variables:

@Configuration
public class SocialConfig implements SocialConfigurer {

    @Override
    public void addConnectionFactories(ConnectionFactoryConfigurer cfConfig, Environment env) {
        cfConfig.addConnectionFactory(new TwitterConnectionFactory(
            env.getProperty("twitter.consumerKey"),
            env.getProperty("twitter.consumerSecret")));  // Compliant
    }
}

See

xml:S3355

Why is this an issue?

Every filter defined in the web.xml file should be used in a <filter-mapping> element; otherwise, such filters are not invoked.

Noncompliant code example

  <filter>
     <filter-name>DefinedNotUsed</filter-name>
     <filter-class>com.myco.servlet.ValidationFilter</filter-class>
  </filter>

Compliant solution

  <filter>
     <filter-name>ValidationFilter</filter-name>
     <filter-class>com.myco.servlet.ValidationFilter</filter-class>
  </filter>

  <filter-mapping>
     <filter-name>ValidationFilter</filter-name>
     <url-pattern>/*</url-pattern>
  </filter-mapping>

Resources

xml:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease attackers' chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, the use of clear-text protocols has led to real-world vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

<application
    android:usesCleartextTraffic="true"> <!-- Sensitive -->
</application>

For versions older than Android 9 (API level 28), android:usesCleartextTraffic is implicitly set to true.

<application> <!-- Sensitive -->
</application>

Compliant Solution

<application
    android:usesCleartextTraffic="false">
</application>

See

xml:S2647

Why is this an issue?

Basic authentication’s only means of obfuscation is Base64 encoding. Since Base64 encoding is easily recognized and reversed, it offers only the thinnest veil of protection to your users, and should not be used.

Noncompliant code example

<!-- in web.xml -->
<web-app  ...>
  <!--  ...  -->
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

Exceptions

The rule will not raise any issue if HTTPS is enabled, on any URL-pattern.

<web-app  ...>
  <!--  ...  -->
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>HTTPS enabled</web-resource-name>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>

Resources

xml:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side script. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive and used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (which is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / sensitive-security cookies.

Sensitive Code Example

<session-config>
 <cookie-config>
  <http-only>false</http-only> <!-- Sensitive -->
 </cookie-config>
</session-config>

<session-config>
 <cookie-config> <!-- Sensitive: http-only tag is missing defaulting to false -->
 </cookie-config>
</session-config>

Compliant Solution

<session-config>
 <cookie-config>
  <http-only>true</http-only> <!-- Compliant -->
 </cookie-config>
</session-config>

See

xml:S3374

Why is this an issue?

According to the Common Weakness Enumeration,

If two validation forms have the same name, the Struts Validator arbitrarily chooses one of the forms to use for input validation and discards the other. This decision might not correspond to the programmer’s expectations…

In such a case, it is likely that the two forms should be combined. At the very least, one should be removed.

Noncompliant code example

<form-validation>
  <formset>
    <form name="BookForm"> ... </form>
    <form name="BookForm"> ... </form>  <!-- Noncompliant -->
  </formset>
</form-validation>

Compliant solution

<form-validation>
  <formset>
    <form name="BookForm"> ... </form>
  </formset>
</form-validation>

Resources

xml:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production.

Activating a development feature in production can have a wide range of consequences, depending on its use:

  • Technical information leak; generally by disclosing verbose logging information to the application’s user.
  • Arbitrary code execution; generally with a parameter that will allow the remote debugging or profiling of the application.

In all cases, the attack surface of an affected application is increased. In some cases, such features can also make the exploitation of other unrelated vulnerabilities easier.

Ask Yourself Whether

  • The development of the app is completed and the development feature is still activated.
  • The app is distributed to end users with the development feature activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Applications should be released without any development feature activated. When such features are needed during development, they should apply only to a build variant dedicated to development environments. That variant should not be the default build configuration, to prevent development features from being exposed unintentionally.

Sensitive Code Example

In AndroidManifest.xml, the android:debuggable property is set to true. The application will therefore be debuggable.

<application
  android:icon="@mipmap/ic_launcher"
  android:label="@string/app_name"
  android:roundIcon="@mipmap/ic_launcher_round"
  android:supportsRtl="true"
  android:debuggable="true"
  android:theme="@style/AppTheme">
</application>  <!-- Sensitive -->

In a web.config file, the customErrors element’s mode attribute is set to Off. The application will disclose unnecessarily verbose information to its users upon error.

<configuration>
  <system.web>
    <customErrors mode="Off" /> <!-- Sensitive -->
  </system.web>
</configuration>

Compliant Solution

In AndroidManifest.xml, the android:debuggable property is set to false:

<application
  android:icon="@mipmap/ic_launcher"
  android:label="@string/app_name"
  android:roundIcon="@mipmap/ic_launcher_round"
  android:supportsRtl="true"
  android:debuggable="false"
  android:theme="@style/AppTheme">
</application> <!-- Compliant -->

In a web.config file, the customErrors element’s mode attribute is set to On:

<configuration>
  <system.web>
    <customErrors mode="On" /> <!-- Compliant -->
  </system.web>
</configuration>

See

xml:S5322

Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive; for example, it has led to real-world vulnerabilities in the past.

Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application, if it is not already running, once a matching broadcast is received. The receiver is an entry point into the application.

Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver.

Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver.

This rule raises an issue when a receiver is registered without specifying any broadcast permission.

Ask Yourself Whether

  • The data extracted from intents is not sanitized.
  • Intents broadcast is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See the Android documentation for more information.

Sensitive Code Example

<receiver android:name=".MyBroadcastReceiver" android:exported="true">  <!-- Sensitive -->
    <intent-filter>
        <action android:name="android.intent.action.AIRPLANE_MODE"/>
    </intent-filter>
</receiver>

Compliant Solution

Enforce permissions:

<receiver android:name=".MyBroadcastReceiver"
    android:permission="android.permission.SEND_SMS"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.AIRPLANE_MODE"/>
    </intent-filter>
</receiver>

Do not export the receiver and only receive system intents:

<receiver android:name=".MyBroadcastReceiver" android:exported="false">
    <intent-filter>
        <action android:name="android.intent.action.AIRPLANE_MODE"/>
    </intent-filter>
</receiver>

See

xml:S5594

Why is this an issue?

Once an Android component has been exported, it can be used by attackers to launch malicious actions and might also give access to other components that are not exported.

As a result, sensitive user data can be stolen, and components can be launched unexpectedly.

For this reason, the following components should be protected:

  • Providers
  • Activities
  • Activity-aliases
  • Services

To do so, it is recommended to either set exported to false, add android:readPermission and android:writePermission attributes, or add a <permission> tag.

Warning: When targeting Android versions lower than 12, the presence of intent filters will cause exported to be set to true by default.

If a component must be exported, use a <permission> tag and the protection level that matches your use case and data confidentiality requirements.
For example, Sync adapters should use a signature protection level to remain both exported and protected.

Noncompliant code example

The following components are vulnerable because permissions are undefined or partially defined:

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="true"
  android:readPermission="com.example.app.READ_PERMISSION" />  <!-- Noncompliant: write permission is not defined -->
<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="true"
  android:writePermission="com.example.app.WRITE_PERMISSION" />  <!-- Noncompliant: read permission is not defined -->
<activity android:name="com.example.activity.Activity">  <!-- Noncompliant: permissions are not defined -->
  <intent-filter>
    <action android:name="com.example.OPEN_UI"/>
    <category android:name="android.intent.category.DEFAULT"/>
  </intent-filter>
</activity>

Compliant solution

If the component’s capabilities or data are not intended to be shared with other apps, its exported attribute should be set to false:

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="false" />

Otherwise, implement permissions:

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:exported="true"
  android:readPermission="com.example.app.READ_PERMISSION"
  android:writePermission="com.example.app.WRITE_PERMISSION" />

<activity android:name="com.example.activity.Activity"
          android:permission="com.example.app.PERMISSION" >
  <intent-filter>
    <action android:name="com.example.OPEN_UI"/>
    <category android:name="android.intent.category.DEFAULT" />
  </intent-filter>
</activity>

Resources

xml:S5604

Permissions that can have a large impact on user privacy, marked as dangerous or "not for use by third-party applications" by Android, should be requested only if they are really necessary to implement critical features of an application.

Ask Yourself Whether

  • You are not sure that the dangerous permissions requested by the application are really necessary.
  • The users are not clearly informed why and when dangerous permissions are requested by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to carefully review all the permissions and to use dangerous ones only if they are really necessary.
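
When a dangerous permission is genuinely needed, request it at runtime and only when the user triggers the feature that requires it. A minimal sketch (the request code and feature name are illustrative):

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

public class LocationFeature {
    private static final int REQUEST_LOCATION = 42; // arbitrary request code

    // Request the dangerous permission only when the user triggers the
    // feature that needs it, after explaining why it is required.
    void startNavigation(Activity activity) {
        if (ContextCompat.checkSelfPermission(activity, Manifest.permission.ACCESS_FINE_LOCATION)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(activity,
                    new String[]{Manifest.permission.ACCESS_FINE_LOCATION}, REQUEST_LOCATION);
            return; // the result arrives in onRequestPermissionsResult()
        }
        // ... permission already granted: start the location-based feature
    }
}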

Sensitive Code Example

In AndroidManifest.xml:

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <!-- Sensitive -->
<uses-permission android:name="android.permission.ACCESS_MEDIA_LOCATION" /> <!-- Sensitive -->

Compliant Solution

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Compliant -->

See

xml:S6358

Android has a built-in backup mechanism that can save and restore application data. When application backup is enabled, local data from your application can be exported to Google Cloud or to an external device via adb backup. Enabling Android backup exposes your application to disclosure of sensitive data. It can also lead to corruption of local data when restoration is performed from an untrusted source.

By default, application backup is enabled and includes most of the application's local files, such as shared preferences and files saved in internal storage.

Ask Yourself Whether

  • Application backup is enabled and sensitive data is stored in local files, local databases, or shared preferences.
  • Your application never validates data from files that are included in backups.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable application backup unless it is required for your application to work properly.
  • Narrow the scope of backed-up files by using one of the following:
    • backup rules (see the android:fullBackupContent attribute),
    • a custom BackupAgent,
    • the dedicated no_backup folder (see android.content.Context#getNoBackupFilesDir(); a sketch follows this list).
  • Do not back up local data containing sensitive information unless they are properly encrypted.
  • Make sure that the keys used to encrypt backup data are not included in the backup.
  • Validate data from backed-up files. They should be considered untrusted as they could have been restored from an untrusted source.
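
As an illustration of the no_backup approach above, a minimal sketch (file name and token handling are illustrative):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import android.content.Context;

public class TokenStore {
    // Files written to the no_backup directory are excluded from Android's
    // backup mechanism (available since API level 21).
    void saveDeviceToken(Context context, String token) throws IOException {
        File file = new File(context.getNoBackupFilesDir(), "device_token");
        try (FileOutputStream out = new FileOutputStream(file)) {
            out.write(token.getBytes(StandardCharsets.UTF_8));
        }
    }
}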

Sensitive Code Example

<application
    android:allowBackup="true"> <!-- Sensitive -->
</application>

Compliant Solution

Disable application backup.

<application
    android:allowBackup="false">
</application>

If targeting Android 6.0 or above (API level 23), define files to include/exclude from the application backup.

<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup.xml">
</application>

See

xml:S6359

Why is this an issue?

Defining a custom permission in the android.permission namespace may result in an unexpected permission assignment if a newer version of Android adds a permission with the same name. It is recommended to use a namespace specific to the application for custom permissions.

Noncompliant code example

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.organization.app">

    <permission
        android:name="android.permission.MYPERMISSION" /> <!-- Noncompliant -->

</manifest>

Compliant solution

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.organization.app">

    <permission
        android:name="com.organization.app.permission.MYPERMISSION" />

</manifest>

Resources

xml:S6361

android:permission is used to set a single permission for both reading and writing data from a content provider. In regard to the Principle of Least Privilege, client applications that consume the content provider should only have the necessary privileges to complete their tasks. As android:permission grants both read and write access, it prevents client applications from applying this principle. In practice, it means client applications that require read-only access will have to ask for more privileges than what they need: the content provider will always grant read and write together.

Ask Yourself Whether

  • Some client applications consuming the content provider may only require read permission.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

  • Avoid using the android:permission attribute alone. Instead, use the android:readPermission and android:writePermission attributes to define separate read and write permissions.
  • Avoid using the same permission for android:readPermission and android:writePermission attributes.

Sensitive Code Example

<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:permission="com.example.app.PERMISSION"  <!-- Sensitive -->
  android:exported="true"/>
<provider
  android:authorities="com.example.app.Provider"
  android:name="com.example.app.Provider"
  android:readPermission="com.example.app.PERMISSION"  <!-- Sensitive -->
  android:writePermission="com.example.app.PERMISSION" <!-- Sensitive -->
  android:exported="true"/>

Compliant Solution

<provider
  android:authorities="com.example.app.MyProvider"
  android:name="com.example.app.MyProvider"
  android:readPermission="com.example.app.READ_PERMISSION"
  android:writePermission="com.example.app.WRITE_PERMISSION"
  android:exported="true"/>

See

xml:S3281

Why is this an issue?

Default interceptors, such as application security interceptors, must be listed in the ejb-jar.xml file, or they will not be treated as default.

This rule applies to projects that contain JEE Beans (any one of javax.ejb.Singleton, MessageDriven, Stateless or Stateful).

Noncompliant code example

// file: ejb-interceptors.xml
<assembly-descriptor>
 <interceptor-binding> <!-- should be declared in ejb-jar.xml -->
      <ejb-name>*</ejb-name>
      <interceptor-class>com.myco.ImportantInterceptor</interceptor-class> <!-- Noncompliant; will NOT be treated as default -->
   </interceptor-binding>
</assembly-descriptor>

Compliant solution

// file: ejb-jar.xml
<assembly-descriptor>
 <interceptor-binding>
      <ejb-name>*</ejb-name>
      <interceptor-class>com.myco.ImportantInterceptor</interceptor-class>
   </interceptor-binding>
</assembly-descriptor>

Resources

xml:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive and has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act like directives for the browser and change the access control policy, in effect relaxing the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, for example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

<!-- Tomcat 7+ Cors Filter -->
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>*</param-value> <!-- Sensitive -->
  </init-param>
</filter>

Compliant Solution

<!-- Tomcat 7+ Cors Filter -->
<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>https://trusted1.org,https://trusted2.org</param-value> <!-- Compliant -->
  </init-param>
</filter>

See

flex:S1465

Why is this an issue?

A LocalConnection object is used to invoke a method in another LocalConnection object, either within a single SWF file or between multiple SWF files. This kind of local connection should be authorized only when the origin (domain) of the other Flex applications is perfectly defined.

Noncompliant code example

localConnection.allowDomain("*");

Compliant solution

localConnection.allowDomain("www.myDomain.com");
flex:S1466

Why is this an issue?

The Security.exactSettings value should remain set at the default value of true. Setting this value to false could make the SWF vulnerable to cross-domain attacks.

Noncompliant code example

Security.exactSettings = false;

Compliant solution

Security.exactSettings = true;
flex:S1468

Why is this an issue?

Calling Security.allowDomain("*") lets any domain cross-script into the domain of this SWF and exercise its functionality.

Noncompliant code example

Security.allowDomain("*");

Compliant solution

Security.allowDomain("www.myDomain.com");
flex:S1951

This rule is deprecated; use S4507 instead.

Why is this an issue?

The trace() function outputs debug statements, which can be read by anyone with a debug version of the Flash player. Because sensitive information could easily be exposed in this manner, trace() should never appear in production code.

Noncompliant code example

    var val:Number = doCalculation();
    trace("Calculation result: " + val);  // Noncompliant

Compliant solution

    var val:Number = doCalculation();

Resources

flex:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

if (unexpectedCondition)
{
  Alert.show("Unexpected Condition"); // Sensitive
}

The trace() function outputs debug statements, which can be read by anyone with a debug version of the Flash player:

var val:Number = doCalculation();
trace("Calculation result: " + val);  // Sensitive

See

flex:S1442

This rule is deprecated; use S4507 instead.

Why is this an issue?

Alert.show(...) can be useful for debugging during development, but in production mode this kind of pop-up could expose sensitive information to attackers, and should never be displayed.

Noncompliant code example

if (unexpectedCondition)
{
  Alert.show("Unexpected Condition");
}

Resources

java:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input. In some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, which means that a small, carefully-crafted input (such as 20 characters) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact with, in this case, a large carefully-crafted input (thousands of characters).

This rule determines the runtime complexity of a regular expression and informs you of the complexity if it is not linear.

Note that, due to improvements to the matching algorithm, some cases of exponential runtime complexity have become impossible when run using JDK 9 or later. In such cases, an issue will only be reported if the project’s target Java version is 8 or earlier.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen. Note that when performing a full match (e.g. using String.matches), the end of the regex counts as a pattern that can fail because it will only succeed when the end of the string is reached.

  • If you have a non-possessive repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
    • When using JDK 9 or later an optimization applies when the repetition is greedy and the entire regex does not contain any back references. In that case the runtime will only be polynomial (in case of nested repetitions) or even linear (in case of alternations or optional parts).
  • If you have multiple non-possessive repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If you’re performing a partial match (such as by using Matcher.find, String.split, String.replaceAll etc.) and the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition (even a possessive one), if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split("\\s*,") will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using bounded quantifiers, such as {1,5} instead of +.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ doesn’t cause performance issues: the inner group can be matched only if there exists exactly one b char per repetition of the group.
  • Optimize regular expressions with possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example, the quadratic regex .*_.* can be made linear by changing it to [^_]*_.* (see the sketch after this list).
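
A small sketch contrasting two of the strategies above (the patterns are illustrative):

import java.util.regex.Pattern;

public class LinearRegexExamples {
    static final Pattern QUADRATIC  = Pattern.compile(".*_.*");    // both repetitions can overlap the '_' region
    static final Pattern LINEAR     = Pattern.compile("[^_]*_.*"); // first repetition can no longer cross the '_'
    static final Pattern POSSESSIVE = Pattern.compile("a*+b");     // "a*+" never gives characters back to backtracking
}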

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match, especially when using partial matches, for which quadratic runtimes are quite hard to avoid. In those cases consider the following approaches:

  • Solve the problem without regular expressions.
  • Use an alternative non-backtracking regex implementation such as Google’s RE2 or RE2/J.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it or using multiple regular expressions. One example of this would be to replace str.split("\\s*,\\s*") with str.split(",") and then trimming the spaces from the strings as a second step.
  • When using Matcher.find(), it is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to matcher.find() could be replaced with matcher.find() && matcher.group(1) != null.
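
A minimal sketch of this last approach (pattern and helper name are illustrative):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class InfallibleFind {
    // Equivalent to searching for "x*y", but the optional tail prevents the
    // backtracking that a failing "y" would otherwise trigger.
    static boolean containsXsThenY(String input) {
        Matcher m = Pattern.compile("x*(y)?").matcher(input);
        while (m.find()) {
            if (m.group(1) != null) {
                return true; // a real match; matches where the "y" group is absent are ignored
            }
        }
        return false;
    }
}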

Sensitive Code Example

The first regex evaluation will never end in JDK <= 9 and the second regex evaluation will never end in any version of the JDK:

java.util.regex.Pattern.compile("(a+)+").matcher(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!").matches(); // Sensitive

java.util.regex.Pattern.compile("(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*").matcher(
"hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchi"+
"cchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihi"+
"chicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccch"+
"chhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihh"+
"ichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihih"+
"iihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci").matches(); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions and thus can be used, where possible, to avoid performance issues:

java.util.regex.Pattern.compile("(a+)++").matcher(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!").matches(); // Compliant

java.util.regex.Pattern.compile("(h|h|ih(((i|a|c|c|a|i|i|j|b|a|i|b|a|a|j))+h)ahbfhba|c|i)*+").matcher(
"hchcchicihcchciiicichhcichcihcchiihichiciiiihhcchi"+
"cchhcihchcihiihciichhccciccichcichiihcchcihhicchcciicchcccihiiihhihihihi"+
"chicihhcciccchihhhcchichchciihiicihciihcccciciccicciiiiiiiiicihhhiiiihchccch"+
"chhhhiiihchihcccchhhiiiiiiiicicichicihcciciihichhhhchihciiihhiccccccciciihh"+
"ichiccchhicchicihihccichicciihcichccihhiciccccccccichhhhihihhcchchihih"+
"iihhihihihicichihiiiihhhhihhhchhichiicihhiiiiihchccccchichci").matches(); // Compliant

See

java:S2115

When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in Java SE

Code examples

The following code uses an empty password to connect to a database.

The vulnerability can be fixed by using a strong password retrieved from a system property. This database.password property is set during deployment. Its value should be strong and different for each database.

Noncompliant code example

Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "login", ""); // Noncompliant

Compliant solution

String password = System.getProperty("database.password");
Connection conn = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "login", password);

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:

  1. Security Vulnerability: Hard-coded passwords can be easily discovered by anyone who has access to the code, such as other developers or attackers. This can lead to unauthorized access to the database and potential data breaches.
  2. Lack of Flexibility: Hard-coded passwords make it difficult to change the password without modifying the code. If the password needs to be updated, it would require recompiling and redeploying the code, which can be time-consuming and error-prone.
  3. Version Control Issues: Storing passwords in code can lead to version control issues. If the code is shared or stored in a version control system, the password will be visible to anyone with access to the repository, which is a security risk.

To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase.
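
As an illustration, a minimal sketch that reads the password from an environment variable (the variable name is illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Database {
    static Connection connect(String url, String user) throws SQLException {
        // The variable name is illustrative; set it at deployment time.
        String password = System.getenv("DATABASE_PASSWORD");
        if (password == null || password.isEmpty()) {
            throw new IllegalStateException("DATABASE_PASSWORD is not set");
        }
        return DriverManager.getConnection(url, user, password);
    }
}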

Resources

Standards

java:S3329

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In the mode Cipher Block Chaining (CBC), each block is used as cryptographic input for the next block. For this reason, the first block requires an initialization vector (IV), also called a "starting variable" (SV).

If the same IV is used for multiple encryption sessions or messages, each new encryption of the same plaintext input would always produce the same ciphertext output. This may allow an attacker to detect patterns in the ciphertext.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, a company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptography Extension

Code examples

Noncompliant code example

import java.nio.charset.StandardCharsets;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidKeyException;
import java.security.InvalidAlgorithmParameterException;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.crypto.NoSuchPaddingException;

public void encrypt(String key, String plainText) {

    byte[] staticBytes = "7cVgr5cbdCZVw5WY".getBytes(StandardCharsets.UTF_8); // the same IV is reused for every call

    GCMParameterSpec iv   = new GCMParameterSpec(128, staticBytes);
    SecretKeySpec keySpec = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES");

    try {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); // GCM, to match the GCMParameterSpec above
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv); // Noncompliant

    } catch(NoSuchAlgorithmException|InvalidKeyException|
            NoSuchPaddingException|InvalidAlgorithmParameterException e) {
        // ...
    }
}

Compliant solution

In this example, the code explicitly uses a number generator that is considered strong.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidKeyException;
import java.security.InvalidAlgorithmParameterException;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import javax.crypto.NoSuchPaddingException;

public void encrypt(String key, String plainText) {

    SecureRandom random = new SecureRandom();
    byte[] randomBytes  = new byte[16];
    random.nextBytes(randomBytes);

    GCMParameterSpec iv   = new GCMParameterSpec(128, randomBytes);
    SecretKeySpec keySpec = new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "AES");

    try {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); // GCM, to match the GCMParameterSpec above
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, iv);

    } catch(NoSuchAlgorithmException|InvalidKeyException|
            NoSuchPaddingException|InvalidAlgorithmParameterException e) {
        // ...
    }
}

How does this work?

Use unique IVs

To ensure high security, initialization vectors must meet two important criteria:

  • IVs must be unique for each encryption operation.
  • For CBC and CFB modes, a secure FIPS-compliant random number generator should be used to generate unpredictable IVs.

The IV does not need to be secret, so the IV or information sufficient to determine the IV may be transmitted along with the ciphertext.

In the previous non-compliant example, the problem is not that the IV is hard-coded.
It is that the same IV is used for multiple encryption attempts.
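
A minimal sketch of this approach, reusing the AES-GCM setup from the examples above (method and class names are illustrative):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class IvPerMessage {
    static byte[] encrypt(SecretKeySpec keySpec, String plainText) throws GeneralSecurityException {
        byte[] iv = new byte[12];        // fresh 96-bit IV, the usual choice for GCM
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, keySpec, new GCMParameterSpec(128, iv));
        byte[] cipherText = cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8));

        // Prepend the IV so the receiver can extract it before decrypting.
        return ByteBuffer.allocate(iv.length + cipherText.length)
                .put(iv).put(cipherText).array();
    }
}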

Resources

Standards

java:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they did not intend, such as updating their profile or sending a message, or more generally anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious web site that embeds a hidden web request. Because web browsers automatically include cookies, the forged actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended. It should be:
    • activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token (see the sketch after this list).
  • Sensitive operations should never be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.
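
For instance, with the Spring Security API used in the examples below, the CSRF token can be stored in a cookie rather than disabling the protection; a minimal sketch:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.web.csrf.CookieCsrfTokenRepository;

@Configuration
@EnableWebSecurity
public class CsrfTokenConfig extends WebSecurityConfigurerAdapter {
  @Override
  protected void configure(HttpSecurity http) throws Exception {
    // CSRF protection stays enabled; the token is stored in a cookie that a
    // JavaScript frontend can read and echo back in the X-XSRF-TOKEN header.
    http.csrf()
        .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse());
  }
}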

Sensitive Code Example

Spring Security provides protection against CSRF attacks by default, but it can be disabled:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    http.csrf().disable(); // Sensitive: csrf protection is entirely disabled
   // or
    http.csrf().ignoringAntMatchers("/route/"); // Sensitive: csrf protection is disabled for specific routes
  }
}

Compliant Solution

Spring Security CSRF protection is enabled by default; do not disable it:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

  @Override
  protected void configure(HttpSecurity http) throws Exception {
    // http.csrf().disable(); // Compliant
  }
}

See

java:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

Throwable.printStackTrace(...) prints a Throwable and its stack trace to System.err (by default), which is not easily parseable and can expose sensitive information:

try {
  /* ... */
} catch(Exception e) {
  e.printStackTrace(); // Sensitive
}

The EnableWebSecurity annotation for Spring Framework with debug set to true enables debugging support:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

@Configuration
@EnableWebSecurity(debug = true) // Sensitive
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
  // ...
}

WebView.setWebContentsDebuggingEnabled(true) for Android enables debugging support:

import android.webkit.WebView;

WebView.setWebContentsDebuggingEnabled(true); // Sensitive
WebView.getFactory().getStatics().setWebContentsDebuggingEnabled(true); // Sensitive

Compliant Solution

Loggers should be used (instead of printStackTrace) to print throwables:

try {
  /* ... */
} catch(Exception e) {
  LOGGER.log(Level.WARNING, "context", e); // e.g. java.util.logging: log(Level, String, Throwable)
}

The EnableWebSecurity annotation for Spring Framework with debug set to false disables debugging support:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;

@Configuration
@EnableWebSecurity(debug = false)
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
  // ...
}

WebView.setWebContentsDebuggingEnabled(false) for Android disables debugging support:

import android.webkit.WebView;

WebView.setWebContentsDebuggingEnabled(false);
WebView.getFactory().getStatics().setWebContentsDebuggingEnabled(false);

See

java:S4512

Setting JavaBean properties is security sensitive. Doing it with untrusted values has led in the past to the following vulnerability:

JavaBeans can have their properties or nested properties set by population functions. An attacker can leverage this feature to push malicious data into the JavaBean that can compromise the software's integrity. A typical attack will try to manipulate the ClassLoader and ultimately execute malicious code.

This rule raises an issue when:

  • BeanUtils.populate(…) or BeanUtilsBean.populate(…) from Apache Commons BeanUtils are called
  • BeanUtils.setProperty(…) or BeanUtilsBean.setProperty(…) from Apache Commons BeanUtils are called
  • org.springframework.beans.BeanWrapper.setPropertyValue(…) or org.springframework.beans.BeanWrapper.setPropertyValues(…) from Spring is called

Ask Yourself Whether

  • the new property values might have been tampered with or provided by an untrusted source.
  • sensitive properties can be modified, for example: class.classLoader

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Sanitize all values used as JavaBean properties.

Don’t set any sensitive properties. Keep full control over which properties are set. If the property names are provided by an untrusted source, filter them with a whitelist.
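
A minimal sketch of such a whitelist filter (the allowed property names are illustrative):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.beanutils.BeanUtils;

public class SafePopulate {
    private static final Set<String> ALLOWED = Set.of("name", "address", "city"); // illustrative whitelist

    static void populate(Object bean, HttpServletRequest request) throws Exception {
        Map<String, Object> safe = new HashMap<>();
        for (String name : Collections.list(request.getParameterNames())) {
            if (ALLOWED.contains(name)) {
                safe.put(name, request.getParameterValues(name));
            }
        }
        BeanUtils.populate(bean, safe); // only whitelisted properties can ever be set
    }
}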

Sensitive Code Example

Company bean = new Company();
HashMap map = new HashMap();
Enumeration names = request.getParameterNames();
while (names.hasMoreElements()) {
    String name = (String) names.nextElement();
    map.put(name, request.getParameterValues(name));
}
BeanUtils.populate(bean, map); // Sensitive: "map" is populated with data coming from user input, here "request.getParameterNames()"

See

java:S4684

With Spring, when a request mapping method is configured to accept bean objects as arguments, the framework will automatically bind HTTP parameters to those objects' properties. If the targeted beans are also persistent entities, the framework will also store those properties in the storage backend, usually the application’s database.

Why is this an issue?

By accepting persistent entities as method arguments, the application allows clients to manipulate the object’s properties directly.

What is the potential impact?

Attackers could forge malicious HTTP requests that will alter unexpected properties of persistent objects. This can lead to unauthorized modifications of the entity’s state. This is known as a mass assignment attack.

Depending on the affected objects and properties, the consequences can vary.

Privilege escalation

If the affected object is used to store the client’s identity or permissions, the attacker could alter it to change their entitlement on the application. This can lead to horizontal or vertical privilege escalation.

Security checks bypass

Because persistent objects are modified directly without prior logic, attackers could exploit this issue to bypass security measures otherwise enforced by the application. For example, an attacker might be able to change their e-mail address to an invalid one by directly setting it without going through the application’s email validation process.

The same could also apply to passwords that attackers could change without complexity validation or knowledge of their current value.

How to fix it in Java EE

Code examples

The following code is vulnerable to a mass assignment attack because it allows modification of the Client persistent entities through maliciously forged Wish object properties.

Noncompliant code example

import javax.persistence.Entity;

@Entity
public class Wish {
  Long productId;
  Long quantity;
  Client client;
}

@Entity
public class Client {
  String clientId;
  String name;
  String password;
}

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class PurchaseOrderController {

  @RequestMapping(path = "/saveForLater", method = RequestMethod.POST)
  public String saveForLater(Wish wish) { // Noncompliant
    session.save(wish);
  }
}

Compliant solution

public class WishDTO {
  Long productId;
  Long quantity;
  Long clientId;
}

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class PurchaseOrderController {

  @RequestMapping(path = "/saveForLater", method = RequestMethod.POST)
  public String saveForLater(WishDTO wish) {
    Wish persistentWish = new Wish();
    persistentWish.productId = wish.productId;
    persistentWish.quantity = wish.quantity;
    persistentWish.client = getClientById(wish.clientId);
    session.save(persistentWish);
  }
}

How does this work?

The compliant code implements a Data Transfer Object (DTO) layer. Instead of accepting a persistent Wish entity as a parameter, the previously vulnerable method now accepts a WishDTO object with a safe, minimal set of properties. It then instantiates a persistent entity and initializes it based on the DTO properties' values. The resulting object can safely be persisted in the database.
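
As a complementary hardening measure, not part of the rule's compliant example, Spring's data binder can also be told to ignore sensitive fields; a minimal sketch:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.InitBinder;

@Controller
public class HardenedPurchaseOrderController {

  // HTTP parameters such as "client.password" are rejected by the binder
  // before they can ever reach a bound object.
  @InitBinder
  public void initBinder(WebDataBinder binder) {
    binder.setDisallowedFields("client.*");
  }
}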

Resources

Documentation

Standards

Articles & blog posts

OWASP O2 Platform Blog - Two Security Vulnerabilities in the Spring Framework’s MVC

java:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in Java JWT

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

import io.jsonwebtoken.Jwts;

public void encode() {
    Jwts.builder()
        .setSubject(USER_LOGIN)
        .compact(); // Noncompliant
}

import io.jsonwebtoken.Jwts;

public void decode() {
    Jwts.parser()
        .setSigningKey(SECRET_KEY)
        .parse(token)
        .getBody(); // Noncompliant
}

Compliant solution

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public void encode() {
    Jwts.builder()
        .setSubject(USER_LOGIN)
        .signWith(SignatureAlgorithm.HS256, SECRET_KEY)
        .compact();
}

When using Jwts.parser(), make sure to call parseClaimsJws instead of parse as it throws exceptions for invalid or missing signatures.

import io.jsonwebtoken.Jwts;

public void decode() {
    Jwts.parser()
        .setSigningKey(SECRET_KEY)
        .parseClaimsJws(token)
        .getBody();
}

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.
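
A minimal sketch of RS256 signing and verification, using the same jjwt API as the examples above (key handling is assumed to happen elsewhere):

import java.security.PrivateKey;
import java.security.PublicKey;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class RsaJwt {
    static String encode(String subject, PrivateKey privateKey) {
        return Jwts.builder()
            .setSubject(subject)
            .signWith(SignatureAlgorithm.RS256, privateKey) // sign with the private key
            .compact();
    }

    static Jws<Claims> decode(String token, PublicKey publicKey) {
        return Jwts.parser()
            .setSigningKey(publicKey)   // verify with the public key
            .parseClaimsJws(token);     // throws if the signature is missing or invalid
    }
}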

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take when encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

java:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptography Extension

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("AES/CBC/PKCS5Padding"); // Noncompliant
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

Example with an asymmetric cipher, RSA:

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("RSA/None/NoPadding"); // Noncompliant
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

Compliant solution

For the AES symmetric cipher, use the GCM mode:

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("AES/GCM/NoPadding");
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

For the RSA asymmetric cipher, use the Optimal Asymmetric Encryption Padding (OAEP):

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher.getInstance("RSA/ECB/OAEPWITHSHA-256ANDMGF1PADDING");
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: encrypt-then-authenticate-then-translate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

java:S5547

This vulnerability makes it possible to recover the cleartext of an encrypted message without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Java Cryptography Extension

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher des = Cipher.getInstance("DES"); // Noncompliant
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

Compliant solution

import javax.crypto.Cipher;
import java.security.NoSuchAlgorithmException;
import javax.crypto.NoSuchPaddingException;

public static void main(String[] args) {
    try {
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
    } catch(NoSuchAlgorithmException|NoSuchPaddingException e) {
        // ...
    }
}

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

java:S5301

ActiveMQ can send and receive JMS Object messages (ObjectMessage in the ActiveMQ context) to comply with the JMS specification. Internally, ActiveMQ relies on Java’s serialization mechanism for the marshalling and unmarshalling of the messages' payload.

Applications should restrict the types that can be deserialized from JMS messages.

Why is this an issue?

When the application does not implement controls over the JMS object types, its clients could be able to force the deserialization of arbitrary objects. This may lead to deserialization injection attacks.

What is the potential impact?

Attackers will be able to force the deserialization of arbitrary objects. This process will trigger the execution of magic unmarshalling methods on the object and its properties. With a specially crafted serialized object, the attackers can exploit those magic methods to achieve malicious purposes.

While the exact impact depends on the types available in the execution context at the time of deserialization, such an attack can generally lead to the execution of arbitrary code on the application server.

Application-specific attacks

By exploiting the behavior of some of the application-defined types and objects, the attacker could manage to affect the application’s business logic. The exact consequences will depend on the application’s nature:

  • Payment bypass in an e-commerce application.
  • Privilege escalation.
  • Unauthorized users' data access.

Publicly-known exploitation

In some cases, depending on the library the application uses and their versions, there may exist publicly known deserialization attack payloads known as gadget chains. In general, they are designed to have severe consequences, such as:

  • Arbitrary code execution
  • Arbitrary file read or write
  • Server-side request forgery

Those attacks are independent of the application’s own logic and of the types it specifies.

How to fix it in Java EE

Code examples

The following code example is vulnerable to a deserialization injection attack because it allows the deserialization of arbitrary types from JMS messages.

Noncompliant code example

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setTrustAllPackages(true); // Noncompliant

Compliant solution

ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
factory.setTrustedPackages(Arrays.asList("org.mypackage1", "org.mypackage2"));

How does this work?

The noncompliant code example calls the setTrustAllPackages method, which explicitly allows the deserialization of arbitrary types. The compliant code example, by contrast, uses the setTrustedPackages method to define a short list of packages whose classes are allowed for deserialization.

While defining a short list of trusted types is generally the state-of-the-art solution to avoid deserialization injection attacks, it is important to ensure that the allowed classes and packages cannot be used to exploit the issue; otherwise, a vulnerability would still be present.

Note that ActiveMQ, starting with version 5.12.2, forces developers to explicitly list packages that JMS messages can contain. This limits the risk of successful exploitation. In versions before that one, calling the ActiveMQConnectionFactory constructor without further configuration would leave the application at risk.

Resources

Documentation

Standards

java:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptography Extension

Code examples

Noncompliant code example

import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        SSLContext.getInstance("TLSv1.1"); // Noncompliant
    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

Compliant solution

import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        SSLContext.getInstance("TLSv1.2");
    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered a good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback arises when the framework in use is outdated: its TLS v1.2 settings may still enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
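
For reference, here is a minimal sketch of requesting a TLS v1.3 context through the same JCE API, assuming a JDK version that supports TLS v1.3 (JDK 11 or later):

import javax.net.ssl.SSLContext;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        SSLContext.getInstance("TLSv1.3"); // Throws on JDKs without TLS v1.3 support
    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}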

Resources

Articles & blog posts

Standards

java:S4544

Using unsafe Jackson deserialization configuration is security-sensitive. It has led in the past to the following vulnerabilities:

When Jackson is configured to allow Polymorphic Type Handling (aka PTH), formerly known as Polymorphic Deserialization, "deserialization gadgets" may allow an attacker to perform remote code execution.

This rule raises an issue when:

  • enableDefaultTyping() is called on an instance of com.fasterxml.jackson.databind.ObjectMapper or org.codehaus.jackson.map.ObjectMapper.
  • or when the annotation @JsonTypeInfo is set at class, interface or field levels and configured with use = JsonTypeInfo.Id.CLASS or use = Id.MINIMAL_CLASS.

Ask Yourself Whether

  • You configured the Jackson deserializer as mentioned above.
  • The serialized data might come from an untrusted source.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use the latest patch versions of jackson-databind blocking the already discovered "deserialization gadgets".
  • Avoid using the default typing configuration: ObjectMapper.enableDefaultTyping().
  • If possible, use @JsonTypeInfo(use = Id.NAME) instead of @JsonTypeInfo(use = Id.CLASS) or @JsonTypeInfo(use = Id.MINIMAL_CLASS), and rely on @JsonTypeName and @JsonSubTypes.

Sensitive Code Example

ObjectMapper mapper = new ObjectMapper();
mapper.enableDefaultTyping(); // Sensitive

@JsonTypeInfo(use = Id.CLASS) // Sensitive
abstract class PhoneNumber {
}
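
As a minimal sketch of the recommendation above, polymorphic handling can rely on logical type names instead of class names. The InternationalNumber and LocalNumber subtypes here are hypothetical; only the subtypes registered via @JsonSubTypes can be instantiated during deserialization:

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.JsonTypeName;

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME)
@JsonSubTypes({
  @JsonSubTypes.Type(value = InternationalNumber.class, name = "international"),
  @JsonSubTypes.Type(value = LocalNumber.class, name = "local")
})
abstract class PhoneNumber {
}

@JsonTypeName("international")
class InternationalNumber extends PhoneNumber {
}

@JsonTypeName("local")
class LocalNumber extends PhoneNumber {
}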

See

java:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Spring

Code examples

In a Spring Security context, session fixation protection is enabled by default but can be disabled with the sessionFixation().none() method. It can also be enabled explicitly with migrateSession(), although this is superfluous in most cases.

Noncompliant code example

@Override
protected void configure(HttpSecurity http) throws Exception {
   http.sessionManagement()
     .sessionFixation().none(); // Noncompliant: the existing session will continue
}

Compliant solution

@Override
protected void configure(HttpSecurity http) throws Exception {
  http.sessionManagement()
     .sessionFixation().migrateSession();
}

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

In Spring, calling migrateSession() explicitly is not necessary in most cases, as Spring Security automatically migrates session attributes to a new session upon successful authentication. An explicit call is typically only needed to trigger the migration of session attributes to a new session outside of the authentication process.
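
Outside of Spring Security, the same protection can be applied manually with the Servlet API. Below is a minimal sketch, assuming a hypothetical login success handler and a Servlet 3.1+ container:

import javax.servlet.http.HttpServletRequest;

public void onLoginSuccess(HttpServletRequest request) {
    // Regenerate the session identifier after authentication so that any
    // attacker-fixed ID becomes invalid; session attributes are preserved.
    request.changeSessionId();
}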

Resources

Documentation

Session Fixation Attack Protection

Standards

java:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led in the past to the following vulnerabilities:

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the java.util.Random class relies on a pseudorandom number generator, this class and the related java.lang.Math.random() method should not be used for security-critical applications or for protecting sensitive data. In such contexts, the java.security.SecureRandom class, which relies on a cryptographically strong random number generator (RNG), should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong random number generator (RNG) like "java.security.SecureRandom" in place of this PRNG.
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

Random random = new Random(); // Sensitive use of Random
byte bytes[] = new byte[20];
random.nextBytes(bytes); // Check if bytes is used for hashing, encryption, etc...

Compliant Solution

SecureRandom random = new SecureRandom(); // Compliant for security-sensitive use cases
byte bytes[] = new byte[20];
random.nextBytes(bytes);

See

java:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false, and it is up to the developer to decide whether the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help reduce their impact, as it will not be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session and other security-sensitive cookies.

Sensitive Code Example

If you create a security-sensitive cookie in your Java code:

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setHttpOnly(false);  // Sensitive: this sensitive cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

By default the HttpOnly flag is set to false:

Cookie c = new Cookie(COOKIENAME, sensitivedata);  // Sensitive: this sensitive cookie is created with the httponly flag not defined (by default set to false) and so it can be stolen easily in case of XSS vulnerability

Compliant Solution

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setHttpOnly(true); // Compliant: this sensitive cookie is protected against theft (HttpOnly=true)

See

java:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2).

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Java Cryptography Extension

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(1024); // Noncompliant

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

Here is an example of symmetric key generation with AES:

import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(64); // Noncompliant

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator    = KeyPairGenerator.getInstance("EC");
        ECGenParameterSpec ellipticCurveName = new ECGenParameterSpec("secp112r1"); // Noncompliant
        keyPairGenerator.initialize(ellipticCurveName);

    } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException e) {
        // ...
    }
}

Compliant solution

import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(2048);

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}
import javax.crypto.KeyGenerator;
import java.security.NoSuchAlgorithmException;

public static void main(String[] args) {
    try {
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(128);

    } catch (NoSuchAlgorithmException e) {
        // ...
    }
}
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.InvalidAlgorithmParameterException;
import java.security.spec.ECGenParameterSpec;

public static void main(String[] args) {
    try {
        KeyPairGenerator keyPairGenerator    = KeyPairGenerator.getInstance("EC");
        ECGenParameterSpec ellipticCurveName = new ECGenParameterSpec("secp256r1");
        keyPairGenerator.initialize(ellipticCurveName);

    } catch (NoSuchAlgorithmException | InvalidAlgorithmParameterException e) {
        // ...
    }
}

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

java:S2254

This function uses a session ID that is supplied by the client. Because of this, the ID may not be valid or might even be spoofed.

Why is this an issue?

According to the API documentation of the HttpServletRequest.getRequestedSessionId() method:

Returns the session ID specified by the client. This may not be the same as the ID of the current valid session for this request. If the client did not specify a session ID, this method returns null.

The session ID it returns is either transmitted through a cookie or a URL parameter. This allows an end user to manually update the value of this session ID in an HTTP request.

Because the end user can manually change this value, the session ID in the request should only be used by a servlet container (e.g. Tomcat or Jetty) to check whether the value matches the ID of an existing session. If it does not, the user should be considered unauthenticated.

What is the potential impact?

Using a client-supplied session ID to manage sessions on the server side can potentially have an impact on the security of the application.

Impersonation (through session fixation)

If an attacker succeeds in fixing a user’s session to a session identifier that they know, then they can impersonate this victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

How to fix it in Java EE

Code examples

In both examples, a session ID is used to check whether a user’s session is still active. In the noncompliant example, the session ID supplied by the user is used. In the compliant example, the session ID defined by the server is used instead.

Noncompliant code example

if (isActiveSession(request.getRequestedSessionId())) { // Noncompliant
    // ...
}

Compliant solution

if (isActiveSession(request.getSession().getId())) {
    // ...
}

How does this work?

The noncompliant example uses HttpServletRequest.getRequestedSessionId() to retrieve a session ID, which is then used to verify whether the given session is still active. As this value is supplied by the user, it is not guaranteed to be a valid ID.

The compliant example instead uses the server’s session ID to verify if the session is active. Additionally, getSession() will create a new session if the user’s request does not contain a valid ID.

Resources

Documentation

Standards

java:S2257

The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Standard algorithms such as SHA-256, SHA-384, or SHA-512 should be used instead.

This rule tracks the creation of java.security.MessageDigest subclasses.

Recommended Secure Coding Practices

  • Use a standard algorithm instead of creating a custom one.

Sensitive Code Example

public class MyCryptographicAlgorithm extends MessageDigest {
  ...
}

Compliant Solution

MessageDigest digest = MessageDigest.getInstance("SHA-256");
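
For reference, here is one typical way to use the standard instance; the input string is a placeholder:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public static byte[] sha256(String input) throws NoSuchAlgorithmException {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    return digest.digest(input.getBytes(StandardCharsets.UTF_8));
}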

See

java:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: SASL and Simple. The Simple Authentication method further breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains Personally Identifiable Information, an attacker accessing it might constitute a violation of regulatory requirements in some countries. For example, this kind of security event would violate the European GDPR.

How to fix it

Code examples

The following code indicates an anonymous LDAP authentication vulnerability because it binds to a remote server using an Anonymous Simple authentication mechanism.

Noncompliant code example

// Set up the environment for creating the initial context
Hashtable<String, Object> env = new Hashtable<String, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=JNDITutorial");

// Use anonymous authentication
env.put(Context.SECURITY_AUTHENTICATION, "none"); // Noncompliant

// Create the initial context
DirContext ctx = new InitialDirContext(env);

Compliant solution

// Set up the environment for creating the initial context
Hashtable<String, Object> env = new Hashtable<String, Object>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://localhost:389/o=Example");

// Use simple authentication
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, "cn=local, ou=Unit, o=Example");
env.put(Context.SECURITY_CREDENTIALS, getLDAPPassword());

// Create the initial context
DirContext ctx = new InitialDirContext(env);

Resources

Documentation

Standards

java:S4434

JNDI supports the deserialization of objects from LDAP directories, which can lead to remote code execution.

This rule raises an issue when an LDAP search query is executed with SearchControls configured to allow deserialization.

Ask Yourself Whether

  • The application connects to an untrusted LDAP directory.
  • User-controlled objects can be stored in the LDAP directory.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable deserialization of LDAP objects.

Sensitive Code Example

DirContext ctx = new InitialDirContext();
// ...
ctx.search(query, filter,
        new SearchControls(scope, countLimit, timeLimit, attributes,
            true, // Noncompliant; allows deserialization
            deref));

Compliant Solution

DirContext ctx = new InitialDirContext();
// ...
ctx.search(query, filter,
        new SearchControls(scope, countLimit, timeLimit, attributes,
            false, // Compliant
            deref));

See

java:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that do not compute too "quickly", such as bcrypt, scrypt, argon2, or pbkdf2, because slower computation hinders brute-force attacks.
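
As a minimal sketch of the password-hashing recommendation, here is PBKDF2 via the SecretKeyFactory shipped with the JCE. The iteration count and output length are illustrative and should be tuned; the random salt must be stored alongside the hash:

import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;

public static byte[] newSalt() {
    byte[] salt = new byte[16];
    new SecureRandom().nextBytes(salt); // a unique random salt per password
    return salt;
}

public static byte[] hashPassword(char[] password, byte[] salt)
        throws NoSuchAlgorithmException, InvalidKeySpecException {
    // PBKDF2 with HMAC-SHA-512; 210000 iterations and a 512-bit output are illustrative
    PBEKeySpec spec = new PBEKeySpec(password, salt, 210000, 512);
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512")
        .generateSecret(spec).getEncoded();
}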

Sensitive Code Example

MessageDigest md1 = MessageDigest.getInstance("SHA");  // Sensitive: SHA is not a standard name; for most security providers it's an alias of SHA-1
MessageDigest md2 = MessageDigest.getInstance("SHA1");  // Sensitive

Compliant Solution

MessageDigest md1 = MessageDigest.getInstance("SHA-512"); // Compliant

See

java:S4792

This rule is deprecated, and will eventually be removed.

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how it is logged.

This rule flags code that initializes or modifies logger configuration so that it can be reviewed. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into the logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The logging level (info, warn, error) might filter out important information. The logs might also lack contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode, as they might write sensitive information to the logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them to the logs. This includes checking their size, content, encoding, syntax, etc. As with any user input, validate using whitelists whenever possible. Enabling users to write what they want into your logs can have many impacts. They could, for example, use all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

This rule supports the following libraries: Log4J, java.util.logging and Logback

// === Log4J 2 ===
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.*;
import org.apache.logging.log4j.core.config.*;

// Sensitive: creating a new custom configuration
abstract class CustomConfigFactory extends ConfigurationFactory {
    // ...
}

class A {
    void foo(Configuration config, LoggerContext context, java.util.Map<String, Level> levelMap,
            Appender appender, java.io.InputStream stream, java.net.URI uri,
            java.io.File file, java.net.URL url, String source, ClassLoader loader, Level level, Filter filter)
            throws java.io.IOException {
        // Creating a new custom configuration
        ConfigurationBuilderFactory.newConfigurationBuilder();  // Sensitive

        // Setting loggers level can result in writing sensitive information in production
        Configurator.setAllLevels("com.example", Level.DEBUG);  // Sensitive
        Configurator.setLevel("com.example", Level.DEBUG);  // Sensitive
        Configurator.setLevel(levelMap);  // Sensitive
        Configurator.setRootLevel(Level.DEBUG);  // Sensitive

        config.addAppender(appender); // Sensitive: this modifies the configuration

        LoggerConfig loggerConfig = config.getRootLogger();
        loggerConfig.addAppender(appender, level, filter); // Sensitive
        loggerConfig.setLevel(level); // Sensitive

        context.setConfigLocation(uri); // Sensitive

        // Load the configuration from a stream or file
        new ConfigurationSource(stream);  // Sensitive
        new ConfigurationSource(stream, file);  // Sensitive
        new ConfigurationSource(stream, url);  // Sensitive
        ConfigurationSource.fromResource(source, loader);  // Sensitive
        ConfigurationSource.fromUri(uri);  // Sensitive
    }
}
// === java.util.logging ===
import java.util.logging.*;

class M {
    void foo(LogManager logManager, Logger logger, java.io.InputStream is, Handler handler)
            throws SecurityException, java.io.IOException {
        logManager.readConfiguration(is); // Sensitive

        logger.setLevel(Level.FINEST); // Sensitive
        logger.addHandler(handler); // Sensitive
    }
}
// === Logback ===
import ch.qos.logback.classic.util.ContextInitializer;
import ch.qos.logback.core.Appender;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.*;

class M {
    void foo(Logger logger, Appender<ILoggingEvent> fileAppender) {
        System.setProperty(ContextInitializer.CONFIG_FILE_PROPERTY, "config.xml"); // Sensitive
        JoranConfigurator configurator = new JoranConfigurator(); // Sensitive

        logger.addAppender(fileAppender); // Sensitive
        logger.setLevel(Level.DEBUG); // Sensitive
    }
}

Exceptions

Log4J 1.x is not covered as it has reached end of life.

See

java:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate for example.com, serve it from a host with a different name, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Apache Commons Email

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled because the call to setSSLCheckServerIdentity is omitted. To enable validation, call this method with true.

Noncompliant code example

import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.Email;
import org.apache.commons.mail.SimpleEmail;

public void sendMail(String message) {
    Email email = new SimpleEmail();

    email.setMsg(message);
    email.setSmtpPort(465);
    email.setAuthenticator(new DefaultAuthenticator(username, password));
    email.setSSLOnConnect(true); // Noncompliant

    email.send();
}

Compliant solution

import org.apache.commons.mail.DefaultAuthenticator;
import org.apache.commons.mail.Email;
import org.apache.commons.mail.SimpleEmail;

public void sendMail(String message) {
    Email email = new SimpleEmail();

    email.setMsg(message);
    email.setSmtpPort(465);
    email.setAuthenticator(new DefaultAuthenticator(username, password));
    email.setSSLCheckServerIdentity(true);
    email.setSSLOnConnect(true);

    email.send();
}

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

If the contacted host is on a development machine and there is no other choice, try the following solution:

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname to the /etc/hosts file.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts

Resources

Standards

java:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Java SE

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); // Noncompliant

Compliant solution

Protection from XXE can be achieved in several different ways. Choose one depending on how the affected parser object is used in your code.

1. The first way is to completely disable DOCTYPE declarations:

// Applicable to:
// - DocumentBuilderFactory
// - SAXParserFactory
// - SchemaFactory
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);

// For XMLInputFactory:
factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);

2. Disable external entity declarations completely:

// Applicable to:
// - DocumentBuilderFactory
// - SAXParserFactory
factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);

// For XMLInputFactory:
factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, Boolean.FALSE);

3. Prohibit the use of all protocols by external entities:

// `setAttribute` variant, applicable to:
// - DocumentBuilderFactory
// - TransformerFactory
factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

// `setProperty` variant, applicable to:
// - XMLInputFactory
// - SchemaFactory
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

// For SAXParserFactory, the prohibition is done on child objects:
SAXParser parser = factory.newSAXParser();
parser.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
parser.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
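
As defense in depth, a DOM parser can additionally be given a custom EntityResolver that resolves every external entity to an empty document. Below is a minimal sketch, combining this with the DOCTYPE restriction shown above:

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.xml.sax.InputSource;
import java.io.StringReader;

public static DocumentBuilder hardenedBuilder() throws ParserConfigurationException {
    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
    factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
    DocumentBuilder builder = factory.newDocumentBuilder();
    // Resolve any external entity that slips through to an empty stream
    builder.setEntityResolver((publicId, systemId) -> new InputSource(new StringReader("")));
    return builder;
}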

Going the extra mile

Disable entity expansion

Specifically for DocumentBuilderFactory, it is possible to disable the entity expansion. Note, however, that this does not prevent the retrieval of external entities.

factory.setExpandEntityReferences(false);

Resources

Standards

java:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

    public void setPermissions(String filePath) throws IOException {
        Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
        // user permission
        perms.add(PosixFilePermission.OWNER_READ);
        perms.add(PosixFilePermission.OWNER_WRITE);
        perms.add(PosixFilePermission.OWNER_EXECUTE);
        // group permissions
        perms.add(PosixFilePermission.GROUP_READ);
        perms.add(PosixFilePermission.GROUP_EXECUTE);
        // others permissions
        perms.add(PosixFilePermission.OTHERS_READ); // Sensitive
        perms.add(PosixFilePermission.OTHERS_WRITE); // Sensitive
        perms.add(PosixFilePermission.OTHERS_EXECUTE); // Sensitive

        Files.setPosixFilePermissions(Paths.get(filePath), perms);
    }
    public void setPermissionsUsingRuntimeExec(String filePath) throws IOException {
        Runtime.getRuntime().exec("chmod 777 file.json"); // Sensitive
    }
    public void setOthersPermissionsHardCoded(String filePath) throws IOException {
        Files.setPosixFilePermissions(Paths.get(filePath), PosixFilePermissions.fromString("rwxrwxrwx")); // Sensitive
    }

Compliant Solution

This solution applies to operating systems that implement the POSIX standard; on Windows, it will throw an UnsupportedOperationException.

    public void setPermissionsSafe(String filePath) throws IOException {
        Set<PosixFilePermission> perms = new HashSet<PosixFilePermission>();
        // user permission
        perms.add(PosixFilePermission.OWNER_READ);
        perms.add(PosixFilePermission.OWNER_WRITE);
        perms.add(PosixFilePermission.OWNER_EXECUTE);
        // group permissions
        perms.add(PosixFilePermission.GROUP_READ);
        perms.add(PosixFilePermission.GROUP_EXECUTE);
        // others permissions removed
        perms.remove(PosixFilePermission.OTHERS_READ); // Compliant
        perms.remove(PosixFilePermission.OTHERS_WRITE); // Compliant
        perms.remove(PosixFilePermission.OTHERS_EXECUTE); // Compliant

        Files.setPosixFilePermissions(Paths.get(filePath), perms);
    }

See

java:S3752

An HTTP method is safe when used to perform a read-only operation, such as retrieving information. In contrast, an unsafe HTTP method is used to change the state of an application, for instance to update a user’s profile on a web application.

Common safe HTTP methods are GET, HEAD, or OPTIONS.

Common unsafe HTTP methods are POST, PUT and DELETE.

Allowing both safe and unsafe HTTP methods to perform a specific operation on a web application could impact its security; for example, CSRF protections usually only protect operations performed with unsafe HTTP methods.

Ask Yourself Whether

  • HTTP methods are not defined at all for a route/controller of the application.
  • Safe HTTP methods are defined and used for a route/controller that can change the state of an application.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

For all the routes/controllers of an application, the authorized HTTP methods should be explicitly defined and safe HTTP methods should only be used to perform read-only operations.

Sensitive Code Example

@RequestMapping("/delete_user")  // Sensitive: by default all HTTP methods are allowed
public String delete1(String username) {
// state of the application will be changed here
}

@RequestMapping(path = "/delete_user", method = {RequestMethod.GET, RequestMethod.POST}) // Sensitive: both safe and unsafe methods are allowed
String delete2(@RequestParam("id") String id) {
// state of the application will be changed here
}

Compliant Solution

@RequestMapping("/delete_user", method = RequestMethod.POST)  // Compliant
public String delete1(String username) {
// state of the application will be changed here
}

@RequestMapping(path = "/delete_user", method = RequestMethod.POST) // Compliant
String delete2(@RequestParam("id") String id) {
// state of the application will be changed here
}

See

java:S4601

Spring Framework, and, more precisely, the Spring Security component, allows setting up access control checks at the URI level. This is done by adding request matchers to the security configuration, each authorizing access to some resources depending on the incoming request entitlement.

Similarly to firewall filtering rules, the order in which those matchers are defined is security relevant.

Why is this an issue?

Configured URL matchers are considered in the order they are declared. In particular, for a given resource, if a looser filter is defined before a stricter one, only the less secure configuration will apply. No request will ever reach the stricter rule.

This rule raises an issue when:

  • A URL pattern ending with ** precedes another one having the same prefix. E.g. /admin/** is defined before /admin/example/**
  • A pattern without wildcard characters is preceded by another one that matches it. E.g.: /page-index/db is defined after /page*/**

What is the potential impact?

Access control rules that have been defined but cannot be applied generally indicate an error in the filtering process. In most cases, this will have consequences on the application’s authorization and authentication mechanisms.

Authentication bypass

When the ignored access control rule is supposed to enforce the authentication on a resource, the consequence is a bypass of the authentication for that resource. Depending on the scope of the ignored rule, a single feature or whole sections of the application can be left unprotected.

Attackers could take advantage of such an issue to access the affected features without prior authentication, which may impact the confidentiality or integrity of sensitive, business, or personal data.

Privilege escalation

When the ignored access control rule is supposed to verify the role of an authenticated user, the consequence is a privilege escalation or authorization bypass. An authenticated user with low privileges on the application will be able to access more critical features or sections of the application.

This could have financial consequences if the accessed features are normally accessed by paying users. It could also impact the confidentiality or integrity of sensitive, business, or personal data, depending on the features.

How to fix it in Spring

Code examples

The following code is vulnerable because it defines access control configuration in the wrong order.

Noncompliant code example

  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .antMatchers("/resources/**", "/signup", "/about").permitAll()
      .antMatchers("/admin/**").hasRole("ADMIN")
      .antMatchers("/admin/login").permitAll() // Noncompliant
      .antMatchers("/**", "/home").permitAll()
      .antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')") // Noncompliant
      .and().formLogin().loginPage("/login").permitAll().and().logout().permitAll();
  }

Compliant solution

  protected void configure(HttpSecurity http) throws Exception {
    http.authorizeRequests()
      .antMatchers("/resources/**", "/signup", "/about").permitAll()
      .antMatchers("/admin/login").permitAll()
      .antMatchers("/admin/**").hasRole("ADMIN")
      .antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')")
      .antMatchers("/**", "/home").permitAll()
      .and().formLogin().loginPage("/login").permitAll().and().logout().permitAll();
  }

Resources

Documentation

Standards

java:S1313

Hardcoding IP addresses is security-sensitive and has led to publicly disclosed vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to make a rapid fix every time this happens, instead of having an operations team change a configuration file.
  • It misleadingly encourages using the same address in every environment (dev, sys, qa, prod).

Last but not least, it affects application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but with a hardcoded IP address, fixing the issue takes longer, which increases the impact of an attack.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

String ip = "192.168.12.42"; // Sensitive
Socket socket = new Socket(ip, 6667);

Compliant Solution

String ip = System.getenv("IP_ADDRESS"); // Compliant
Socket socket = new Socket(ip, 6667);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

java:S2647

This rule is deprecated, and will eventually be removed.

Basic authentication is a vulnerable method of user authentication that should be avoided. It functions by transmitting a Base64 encoded username and password. As Base64 is easy to recognize and reverse, sensitive data may be leaked this way.

Why is this an issue?

Basic authentication is a simple and widely used method of user authentication for HTTP requests. When a client sends a request to a server that requires authentication, the client includes the username and password (concatenated together and Base64 encoded) in the "Authorization" header of the HTTP request. The server verifies the credentials and grants access if they are valid. Every request sent to the server to a protected endpoint must include these credentials.

Basic authentication is considered insecure for several reasons:

  • It transmits user credentials in plain text, making them susceptible to interception and eavesdropping.
  • It relies solely on the server’s ability to verify the provided credentials. There is no mechanism for additional security measures like multi-factor authentication or account lockouts after multiple failed login attempts.
  • It does not provide a way to manage user sessions securely. The client typically includes the credentials in every request, which creates more opportunities for an attacker to steal these credentials.

These security limitations make basic authentication an insecure choice for authentication or authorization over HTTP.

What is the potential impact?

Basic authentication transmits passwords in plain text, which makes it vulnerable to interception by attackers.

Session hijacking and man-in-the-middle attack

If an attacker gains access to the network traffic, they can easily capture the username and password. Basic authentication does not provide any mechanism to protect against session hijacking attacks. Once a user is authenticated, the session identifier (the username and password) is sent in clear text with each subsequent request. If attackers can intercept one request, they can use it to impersonate the authenticated user, gaining unauthorized access to their account and potentially performing malicious actions.

Brute-force attacks

Basic authentication does not have any built-in protection against brute-force attacks. Attackers can repeatedly guess passwords until they find the correct one, especially if weak or commonly used passwords are used. This can lead to unauthorized access to user accounts and potential data breaches.

How to fix it in Java SE

Code examples

The following code uses basic authentication to send out an HTTP request to a protected endpoint.

Noncompliant code example

String encoded = Base64.getEncoder().encodeToString("login:passwd".getBytes());
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Authorization", "Basic " + encoded); // Noncompliant

Compliant solution

// An access token should be retrieved before the HTTP request
String accessToken = System.getenv("ACCESS_TOKEN");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setRequestProperty("Authorization", "Bearer " + accessToken);

How does this work?

Token-based authentication and OAuth

Token-based authentication is a safer alternative to basic authentication. A unique token is generated upon successful authentication and sent to the client, which is then included in subsequent requests. Therefore, it eliminates the need to transmit sensitive credentials with each request. OAuth also works by authenticating users via tokens. It gives even more flexibility on top of this by offering scopes, which limit an application’s access to a user’s account.

Additionally, both token-based authentication and OAuth support mechanisms for token expiration, revocation, and refresh. This gives more flexibility than basic authentication, as compromised tokens carry much less risk than a compromised password.

SSL encryption for HTTP requests

With basic authentication, user credentials are transmitted in plain text, which makes them vulnerable to interception and eavesdropping. However, when HTTPS is employed, the data is encrypted before transmission, making it significantly more difficult for attackers to intercept and decipher the credentials. If no other form of authentication is possible for this code, then every request must be sent over HTTPS to ensure credentials are kept safe.
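
If HTTPS is the chosen mitigation, the request from the example above only needs to target an https URL. A minimal sketch (the endpoint is a hypothetical placeholder):

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

// Hypothetical endpoint; the https scheme ensures the Authorization header
// and body are encrypted in transit.
URL url = new URL("https://api.example.com/endpoint");
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);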

Resources

Documentation

Standards

java:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Java Cryptography Extension

Code examples

The following code contains examples of disabled certificate validation.

Certificate validation is disabled by overriding X509TrustManager with an empty implementation. It is highly recommended to keep the original implementation instead.

Noncompliant code example

class TrustAllManager implements X509TrustManager {

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {  // Noncompliant
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException { // Noncompliant
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return null;
    }
}

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.

Here is a sample command to import a certificate to the Java trust store:

keytool -import -alias myserver -file myserver.crt -keystore cacerts
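
Once the certificate is trusted, no validation logic needs to change in the application code. Below is a minimal sketch, assuming the certificate was imported into a local cacerts file protected by the default changeit password:

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Load the trust store containing the imported certificate
KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
try (FileInputStream in = new FileInputStream("cacerts")) {
    trustStore.load(in, "changeit".toCharArray());
}

// Build an SSLContext that validates certificates against this trust store,
// keeping the default X509TrustManager implementation intact
TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);

SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, tmf.getTrustManagers(), null);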

Resources

Standards

java:S2658

This rule is deprecated; use S6173 instead.

Why is this an issue?

Dynamically loaded classes could contain malicious code executed by a static class initializer. That is, you would not even have to instantiate or explicitly invoke methods on such classes to be vulnerable to an attack.

This rule raises an issue for each use of dynamic class loading.

Noncompliant code example

String className = System.getProperty("messageClassName");
Class clazz = Class.forName(className);  // Noncompliant
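
A safer alternative, if dynamic loading cannot be avoided, is to restrict it to an explicit allow-list. A minimal sketch (the class names in allowedClasses are hypothetical):

import java.util.Set;

// Hypothetical allow-list of classes that are legitimate to load dynamically
Set<String> allowedClasses = Set.of("com.example.TextMessage", "com.example.JsonMessage");

String className = System.getProperty("messageClassName");
if (!allowedClasses.contains(className)) {
  throw new IllegalArgumentException("Unexpected class name: " + className);
}
Class<?> clazz = Class.forName(className); // Loading is restricted to the allow-list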

Resources

java:S5804

User enumeration refers to the ability to guess existing usernames in a web application database. This can happen, for example, when using "sign-in/sign-on/forgot password" functionalities of a website.

When a user tries to "sign-in" to a website with an incorrect username/login, the web application should not disclose that the username doesn’t exist with a message similar to "this username is incorrect". Instead, a generic message like "bad credentials" should be used; this way it’s not possible to guess whether the username or the password was incorrect during authentication.

If a user-management feature discloses information about the existence of a username, attackers can use brute force attacks to retrieve a large number of valid usernames, which will impact the privacy of the corresponding users and facilitate other attacks (phishing, password guessing, etc.).

Ask Yourself Whether

  • The application discloses that a username exists in its database. Most of the time this kind of leak can be avoided, except in the "registration/sign-on" part of a website, where the user must choose a username that is not already taken by another user.
  • There is no rate limiting and CAPTCHA protection in place for requests involving a username.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

When a user performs a request involving a username, it should not be possible to spot differences between a valid and incorrect username:

  • Error messages should be generic and not disclose if the username is valid or not.
  • The response time must be similar whether the username is valid or not.
  • CAPTCHA and other rate limiting solutions should be implemented.

Sensitive Code Example

In a Spring-security web application the username leaks when:

  • The string used as argument of loadUserByUsername method is used in an exception message:
public String authenticate(String username, String password) {
  // ....
  MyUserDetailsService s1 = new MyUserDetailsService();
  MyUserPrincipal u1 = s1.loadUserByUsername(username);

  if(u1 == null) {
    throw new BadCredentialsException(username+" doesn't exist in our database"); // Sensitive
  }
  // ....
}

  • A UsernameNotFoundException is thrown by the loadUserByUsername method (except when it is caught by the HideUserNotFoundExceptions mechanism):

public String authenticate(String username, String password) {
  // ....
  if(user == null) {
      throw new UsernameNotFoundException("user not found"); // Sensitive
  }
  // ....
}

  • HideUserNotFoundExceptions is set to false:

DaoAuthenticationProvider daoauth = new DaoAuthenticationProvider();
daoauth.setUserDetailsService(new MyUserDetailsService());
daoauth.setPasswordEncoder(new BCryptPasswordEncoder());
daoauth.setHideUserNotFoundExceptions(false); // Sensitive
builder.authenticationProvider(daoauth);

Compliant Solution

In a Spring-security web application:

  • The same message should be used regardless of whether the username or the password is wrong:
public String authenticate(String username, String password) throws AuthenticationException {
  Details user = null;
  try {
    user = loadUserByUsername(username);
  } catch (UsernameNotFoundException | DataAccessException e) {
    // Hide this exception reason to not disclose that the username doesn't exist
  }
  if (user == null || !user.isPasswordCorrect(password)) {
     // User should not be able to guess if the bad credentials message is related to the username or the password
    throw new BadCredentialsException("Bad credentials");
  }
}

  • HideUserNotFoundExceptions should be set to true:

DaoAuthenticationProvider daoauth = new DaoAuthenticationProvider();
daoauth.setUserDetailsService(new MyUserDetailsService());
daoauth.setPasswordEncoder(new BCryptPasswordEncoder());
daoauth.setHideUserNotFoundExceptions(true); // Compliant
builder.authenticationProvider(daoauth);

See

java:S5808

When granting users access to resources of an application, such an authorization should be based on strong decisions. For instance, a user may be authorized to access a resource only if they are authenticated, or if they have the correct role and privileges.

Why is this an issue?

Access control is a critical aspect of web frameworks that ensures proper authorization and restricts access to sensitive resources or actions. To enable access control, web frameworks offer components that are responsible for evaluating user permissions and making access control decisions. They might examine the user’s credentials, such as roles or privileges, and compare them against predefined rules or policies to determine whether the user should be granted access to a specific resource or action.

Conventionally, these checks should never grant access to every request received. If an endpoint or component is meant to be public, then it should be ignored by access control components. Conversely, if an endpoint should deny some users from accessing it, then access control has to be configured correctly for this endpoint.

Granting unrestricted access to all users can lead to security vulnerabilities and potential misuse of critical functionalities. It is important to carefully assess access decisions based on factors such as user roles, resource sensitivity, and business requirements. Implementing a robust and granular access control mechanism is crucial for the security and integrity of the web application itself and its surrounding environment.

What is the potential impact?

Not verifying user access strictly can introduce significant security risks. Some of the most prominent risks are listed below. Depending on the use case, it is very likely that other risks are introduced on top of the ones listed.

Unauthorized access

As the access of users is not checked strictly, it becomes very easy for an attacker to gain access to restricted areas or functionalities, potentially compromising the confidentiality, integrity, and availability of sensitive resources. They may exploit this access to perform malicious actions, such as modifying or deleting data, impersonating legitimate users, or gaining administrative privileges, ultimately compromising the security of the system.

Theft of sensitive data

Theft of sensitive data can result from incorrect access control if attackers manage to gain access to databases, file systems, or other storage mechanisms where sensitive data is stored. This can lead to the theft of personally identifiable information (PII), financial data, intellectual property, or other confidential information. The stolen data can be used for various malicious purposes, such as identity theft, financial fraud, or selling the data on the black market, causing significant harm to individuals and organizations affected by the breach.

How to fix it in Spring

Code examples

Noncompliant code example

The vote method of an AccessDecisionVoter implementation is not compliant when it returns only an affirmative decision (ACCESS_GRANTED) or abstains from making a decision (ACCESS_ABSTAIN):

public class WeakNightVoter implements AccessDecisionVoter {
    @Override
    public int vote(Authentication authentication, Object object, Collection collection) {
        Calendar calendar = Calendar.getInstance();
        int currentHour = calendar.get(Calendar.HOUR_OF_DAY);

        if (currentHour >= 8 && currentHour <= 19) {
            return ACCESS_GRANTED;
        }

        return ACCESS_ABSTAIN; // Noncompliant: when users connect during the night, no decision is made
    }
}

The hasPermission method of a PermissionEvaluator implementation is not compliant when it never returns false:

public class MyPermissionEvaluator implements PermissionEvaluator {
    @Override
    public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
        Object user = authentication.getPrincipal();

        if (user.getRole().equals(permission)) {
            return true;
        }

        return true; // Noncompliant
    }
}

Compliant solution

The vote method of an AccessDecisionVoter implementation should return a negative decision (ACCESS_DENIED):

public class StrongNightVoter implements AccessDecisionVoter {
    @Override
    public int vote(Authentication authentication, Object object, Collection collection) {
        Calendar calendar = Calendar.getInstance();
        int currentHour = calendar.get(Calendar.HOUR_OF_DAY);

        if (currentHour >= 8 && currentHour <= 19) {
            return ACCESS_GRANTED;
        }

        return ACCESS_DENIED; // Users are not allowed to connect during the night
    }
}

The hasPermission method of a PermissionEvaluator implementation should return false:

public class MyPermissionEvaluator implements PermissionEvaluator {
    @Override
    public boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission) {
        Object user = authentication.getPrincipal();

        if (user.getRole().equals(permission)) {
            return true;
        }

        return false;
    }
}

Resources

Standards

java:S6263

In AWS, long-term access keys remain valid until you manually revoke them. This makes them highly sensitive: any exposure can have serious consequences, so they should be used with care.

This rule will trigger when encountering an instantiation of com.amazonaws.auth.BasicAWSCredentials.

Ask Yourself Whether

  • The access key is used directly in an application or AWS CLI script running on an Amazon EC2 instance.
  • Cross-account access is needed.
  • The access keys need to be embedded within a mobile application.
  • An existing identity provider (SAML 2.0, on-premises identity store) is already in place.

For more information, see Use IAM roles instead of long-term access keys.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Consider using IAM roles or other features of the AWS Security Token Service that provide temporary credentials, limiting the risks.

Sensitive Code Example

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
// ...

AWSCredentials awsCredentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);

Compliant Solution

Example for AWS STS (see Getting Temporary Credentials with AWS STS).

BasicSessionCredentials sessionCredentials = new BasicSessionCredentials(
   session_creds.getAccessKeyId(),
   session_creds.getSecretAccessKey(),
   session_creds.getSessionToken());
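
For context, the session_creds object above can be obtained from the AWS Security Token Service. A minimal sketch using the AWS SDK for Java v1 (the one-hour duration is an arbitrary choice):

import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

// Request temporary credentials that expire after one hour
AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.defaultClient();
GetSessionTokenRequest request = new GetSessionTokenRequest().withDurationSeconds(3600);
Credentials session_creds = sts.getSessionToken(request).getCredentials();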

See

java:S6362

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered. In the context of a WebView, JavaScript code can exfiltrate local files that might be sensitive or, even worse, access exposed functions of the application, which can result in more severe vulnerabilities such as code injection. Thus, JavaScript support should not be enabled for WebViews unless it is absolutely necessary and the authenticity of the web resources can be guaranteed.

Ask Yourself Whether

  • The WebView only renders static web content that does not require JavaScript code to be executed.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable JavaScript support for WebViews unless it is necessary to execute JavaScript code. Only trusted pages should be rendered.

Sensitive Code Example

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(true); // Sensitive

Compliant Solution

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setJavaScriptEnabled(false);

See

java:S6363

WebViews can be used to display web content as part of a mobile application. A browser engine is used to render and display the content. Like a web application, a mobile application that uses WebViews can be vulnerable to Cross-Site Scripting if untrusted code is rendered.

If malicious JavaScript code in a WebView is executed this can leak the contents of sensitive files when access to local files is enabled.

Ask Yourself Whether

  • No local files have to be accessed by the WebView.
  • The WebView contains untrusted data that could cause harm when rendered.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to disable access to local files for WebViews unless it is necessary. In the case of a successful attack through a Cross-Site Scripting vulnerability, the attacker’s reach decreases drastically if no local files can be read.

Sensitive Code Example

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setAllowFileAccess(true); // Sensitive
webView.getSettings().setAllowContentAccess(true); // Sensitive

Compliant Solution

import android.webkit.WebView;

WebView webView = (WebView) findViewById(R.id.webview);
webView.getSettings().setAllowFileAccess(false);
webView.getSettings().setAllowContentAccess(false);

See

java:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly redundant data (e.g., a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio of most legitimate archives is 1 to 3.
  • Define and control the threshold for maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it’s not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

File f = new File("ZipBomb.zip");
ZipFile zipFile = new ZipFile(f);
Enumeration<? extends ZipEntry> entries = zipFile.entries(); // Sensitive

while(entries.hasMoreElements()) {
  ZipEntry ze = entries.nextElement();
  File out = new File("./output_onlyfortesting.txt");
  Files.copy(zipFile.getInputStream(ze), out.toPath(), StandardCopyOption.REPLACE_EXISTING);
}

Compliant Solution

Do not rely on getSize to retrieve the size of an uncompressed entry, because this method returns what is declared in the archive headers, which can be forged by attackers. Instead, calculate the actual entry size while unzipping it:

File f = new File("ZipBomb.zip");
ZipFile zipFile = new ZipFile(f);
Enumeration<? extends ZipEntry> entries = zipFile.entries();

int THRESHOLD_ENTRIES = 10000;
int THRESHOLD_SIZE = 1000000000; // 1 GB
double THRESHOLD_RATIO = 10;
int totalSizeArchive = 0;
int totalEntryArchive = 0;

while(entries.hasMoreElements()) {
  ZipEntry ze = entries.nextElement();
  InputStream in = new BufferedInputStream(zipFile.getInputStream(ze));
  OutputStream out = new BufferedOutputStream(new FileOutputStream("./output_onlyfortesting.txt"));

  totalEntryArchive ++;

  int nBytes = -1;
  byte[] buffer = new byte[2048];
  int totalSizeEntry = 0;

  while((nBytes = in.read(buffer)) > 0) { // Compliant
      out.write(buffer, 0, nBytes);
      totalSizeEntry += nBytes;
      totalSizeArchive += nBytes;

      double compressionRatio = (double) totalSizeEntry / ze.getCompressedSize(); // cast avoids integer division
      if(compressionRatio > THRESHOLD_RATIO) {
        // ratio between compressed and uncompressed data is highly suspicious, looks like a Zip Bomb Attack
        break;
      }
  }

  if(totalSizeArchive > THRESHOLD_SIZE) {
      // the uncompressed data size is too much for the application resource capacity
      break;
  }

  if(totalEntryArchive > THRESHOLD_ENTRIES) {
      // too many entries in this archive, can lead to inode exhaustion on the system
      break;
  }
}

See

java:S6373

The XML standard allows the inclusion of XML files with the xinclude element. When an XML parser component is set up with the http://apache.org/xml/features/xinclude feature, it will follow the standard and allow the inclusion of remote files.

Why is this an issue?

When the XML parser encounters an xinclude element, it tries to load the file pointed to by the href attribute into the document. Included files can either be local files found on the file system of the application server, or remote files that are downloaded over HTTP, SMB, or other protocols, depending on the capabilities of the application and server.

The files that can be accessed this way are limited only by the application’s entitlements on the local system and by the network filtering the server is subject to.

This issue is particularly severe when the XML parser is used to parse untrusted documents. For example, when user-submitted XML messages are parsed that way.

What is the potential impact?

Allowing the inclusion of arbitrary files in XML documents can have two main consequences depending on what type of file is included: local or remote.

Sensitive file disclosure

If the application allows the inclusion of arbitrary files through the use of the xinclude element, it might be used to disclose arbitrary files from the local file system. Depending on the application’s permissions on the file system, configuration files, runtime secrets, or Personally Identifiable Information could be leaked.

This is particularly true if the affected parser is used to process untrusted XML documents.

Server-side request forgery

When used to retrieve remote files, the application will send network requests to remote hosts. Moreover, it will do so from its current network location, which can have severe consequences if the application server is located on a sensitive network, such as the company corporate network or a DMZ hosting other applications.

Attackers exploiting this issue could try to access internal backend services or corporate file shares. It could allow them to access more sensitive files, bypass authentication mechanisms from frontend applications, or exploit further vulnerabilities in the local services. Note that, in some cases, the requests sent from the application can be automatically authenticated on federated locations. This is often the case in Windows environments when using Active Directory federated authentication.

How to fix it in Java SE

Code examples

The following code is vulnerable because it explicitly enables the xinclude feature.

Noncompliant code example

import javax.xml.parsers.SAXParserFactory;

SAXParserFactory factory = SAXParserFactory.newInstance();

factory.setXIncludeAware(true); // Noncompliant
factory.setFeature("http://apache.org/xml/features/xinclude", true); // Noncompliant

Compliant solution

import javax.xml.parsers.SAXParserFactory;

SAXParserFactory factory = SAXParserFactory.newInstance();

factory.setXIncludeAware(false);
factory.setFeature("http://apache.org/xml/features/xinclude", false);

Resources

Documentation

Standards

java:S6374

This rule is deprecated; use S2755 instead.

Why is this an issue?

By default, XML processors attempt to load all XML schemas and DTDs (their locations are defined with xsi:schemaLocation attributes and DOCTYPE declarations), potentially from external storage such as the file system or the network. If no restrictions are put in place, this may lead to server-side request forgery (SSRF) vulnerabilities.

Noncompliant code example

For DocumentBuilder, SAXParser and Schema JAXP factories:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setValidating(true); // Noncompliant
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setValidating(true); // Noncompliant
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
schemaFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

For Dom4j library:

SAXReader xmlReader = new SAXReader(); // Noncompliant
xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true);  // Noncompliant

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true); // Noncompliant

Compliant solution

For DocumentBuilder, SAXParser and Schema JAXP factories:

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

SAXParserFactory factory = SAXParserFactory.newInstance();
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
schemaFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

For Dom4j library:

SAXReader xmlReader = new SAXReader();
xmlReader.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

For Jdom2 library:

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false);

Exceptions

This rule does not raise an issue when an EntityResolver is set.

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setValidating(true);
DocumentBuilder builder = factory.newDocumentBuilder();
builder.setEntityResolver(new MyEntityResolver());

SAXBuilder builder = new SAXBuilder();
builder.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", true);
builder.setEntityResolver(new MyEntityResolver());

Resources

java:S6376

XML parser Denial of Service attacks target XML parsers, the software components responsible for parsing and interpreting XML documents.

Why is this an issue?

XML files are complex data structures. When a malicious user is able to submit an XML file, it triggers complex processing that may overwhelm the parser. Most of the time, these complex processing features are enabled by default, and XML parsers do not take preventive measures against Denial of Service attacks.

What is the potential impact?

When an attacker successfully exploits the vulnerability, it can lead to a Denial of Service (DoS) condition.

System Unavailability

The affected system becomes unresponsive or crashes, rendering it unavailable to legitimate users. This can have severe consequences, especially for critical systems that rely on continuous availability, such as web servers, APIs, or network services.

Amplification Attacks

In some cases, XML parsers Denial of Service attacks can be used as a part of larger-scale amplification attacks. By leveraging the vulnerability, attackers can generate a disproportionately large response from the targeted system, amplifying the impact of their attack. This can result in overwhelming network bandwidth and causing widespread disruption.

How to fix it in Java SE

Code examples

Noncompliant code example

import javax.xml.parsers.DocumentBuilderFactory;

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, false); // Noncompliant

Compliant solution

import javax.xml.parsers.DocumentBuilderFactory;

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);

Resources

Documentation

Standards

java:S6377

XML signatures are a method used to ensure the integrity and authenticity of XML documents. However, if XML signatures are not validated securely, it can lead to potential vulnerabilities.

Why is this an issue?

Before Java 17, the XML Digital Signature API does not apply restrictions on XML signature validation unless the application runs with a security manager, which is rare.

What is the potential impact?

By not enforcing secure validation, the XML Digital Signature API is more susceptible to attacks such as signature spoofing and injections.

Increased Vulnerability to Signature Spoofing

By disabling secure validation, the application becomes more susceptible to signature spoofing attacks. Attackers can potentially manipulate the XML signature in a way that bypasses the validation process, allowing them to forge or tamper with the signature. This can lead to the acceptance of invalid or maliciously modified signatures, compromising the integrity and authenticity of the XML documents.

Risk of Injection Attacks

Disabling secure validation can expose the application to injection attacks. Attackers can inject malicious code or entities into the XML document, taking advantage of the weakened validation process. In some cases, it can also expose the application to denial-of-service attacks. Attackers can exploit vulnerabilities in the validation process to cause excessive resource consumption or system crashes, leading to service unavailability or disruption.

How to fix it in Java SE

Code examples

For versions of Java before 17, secure validation is disabled by default unless the application runs with a security manager, which is rare. It should be enabled explicitly by setting the org.jcp.xml.dsig.secureValidation attribute to true with the javax.xml.crypto.dsig.dom.DOMValidateContext.setProperty method.

For Java 17 and higher, secure validation is enabled by default.

Noncompliant code example

NodeList signatureElement = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");

XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
DOMValidateContext valContext = new DOMValidateContext(new KeyValueKeySelector(), signatureElement.item(0)); // Noncompliant
XMLSignature signature = fac.unmarshalXMLSignature(valContext);

boolean signatureValidity = signature.validate(valContext);

Compliant solution

NodeList signatureElement = doc.getElementsByTagNameNS(XMLSignature.XMLNS, "Signature");

XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
DOMValidateContext valContext = new DOMValidateContext(new KeyValueKeySelector(), signatureElement.item(0));
valContext.setProperty("org.jcp.xml.dsig.secureValidation", Boolean.TRUE);
XMLSignature signature = fac.unmarshalXMLSignature(valContext);

boolean signatureValidity = signature.validate(valContext);

How does this work?

When XML Signature secure validation mode is enabled, XML signatures are processed more securely. It enforces a number of restrictions that protect you from XML documents containing potentially hostile constructs that can cause denial-of-service conditions or other types of security issues.

Resources

Documentation

Standards

java:S1989

Why is this an issue?

Servlets are components in Java web development, responsible for processing HTTP requests and generating responses. In this context, exceptions are used to handle and manage unexpected errors or exceptional conditions that may occur during the execution of a servlet.

Catching exceptions within the servlet allows us to convert them into meaningful, user-friendly messages. Otherwise, failing to catch exceptions will propagate them to the servlet container, where the default error-handling mechanism may impact the overall security and stability of the server.

Possible security problems are:

  1. Vulnerability to denial-of-service attacks: Uncaught exceptions can leave the servlet container in an unstable state, which can exhaust the available resources and, in the worst case, make the system unavailable.
  2. Exposure of sensitive information: Exceptions handled by the servlet container expose, by default, detailed error messages or debugging information to the user, which may contain sensitive data such as stack traces, database connection details, or system configuration.

Unfortunately, servlet method signatures do not force developers to handle IOException and ServletException:

public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
}

To prevent this risk, this rule enforces all exceptions to be caught within the "do*" methods of servlet classes.

How to fix it

Surround all method calls that may throw an exception with a try/catch block.

Code examples

In the following example, the getByName method may throw an UnknownHostException.

Noncompliant code example

public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
  InetAddress addr = InetAddress.getByName(request.getRemoteAddr()); // Noncompliant
  //...
}

Compliant solution

public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
  try {
    InetAddress addr = InetAddress.getByName(request.getRemoteAddr());
    //...
  }
  catch (UnknownHostException ex) {  // Compliant
    //...
  }
}

Resources

Articles & blog posts

java:S6288

Android KeyStore is a secure container for storing key materials; in particular, it prevents key material extraction: even when the application process is compromised, the attacker cannot extract keys, although they may still be able to use them. It’s possible to enable an Android security feature, user authentication, to restrict the usage of keys to authenticated users only. The lock screen has to be unlocked with defined credentials (pattern/PIN/password, biometric).

Ask Yourself Whether

  • The application requires prohibiting the use of keys in case of compromise of the application process.
  • The key material is used in the context of a highly sensitive application like an e-banking mobile app.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable user authentication (by setting setUserAuthenticationRequired to true during key generation) to use keys for a limited duration of time (by setting appropriate values to setUserAuthenticationValidityDurationSeconds), after which the user must re-authenticate.

Sensitive Code Example

Any user can use the key:

KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

KeyGenParameterSpec builder = new KeyGenParameterSpec.Builder("test_secret_key_noncompliant", KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT) // Noncompliant
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .build();

keyGenerator.init(builder);

Compliant Solution

The use of the key is limited to authenticated users (for a duration of time defined to 60 seconds):

KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");

KeyGenParameterSpec builder = new KeyGenParameterSpec.Builder("test_secret_key", KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
    .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
    .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
    .setUserAuthenticationRequired(true)
    .setUserAuthenticationParameters(60, KeyProperties.AUTH_DEVICE_CREDENTIAL)
    .build();

keyGenerator.init(builder);

See

java:S6291

Storing data locally is a common task for mobile applications. Such data includes preferences or authentication tokens for external services, among other things. There are many convenient solutions that allow storing data persistently, for example SQLiteDatabase, SharedPreferences, and Realm. By default these systems store the data unencrypted, thus an attacker with physical access to the device can read them out easily. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to password-encrypt local databases that contain sensitive information. Most systems provide secure alternatives to plain-text storage that should be used. If no secure alternative is available the data can also be encrypted manually before it is stored.

The encryption password should not be hard-coded in the application. There are different approaches to how the password can be provided for encrypting and decrypting the database. In the case of EncryptedSharedPreferences, the Android Keystore can be used to store the password. Other databases can rely on EncryptedSharedPreferences to store passwords. The password can also be provided dynamically by the user of the application, or it can be fetched from a remote server if the other methods are not feasible.

Sensitive Code Example

For SQLiteDatabase:

SQLiteDatabase db = activity.openOrCreateDatabase("test.db", Context.MODE_PRIVATE, null); // Sensitive

For SharedPreferences:

SharedPreferences pref = activity.getPreferences(Context.MODE_PRIVATE); // Sensitive

For Realm:

RealmConfiguration config = new RealmConfiguration.Builder().build();
Realm realm = Realm.getInstance(config); // Sensitive

Compliant Solution

Instead of SQLiteDatabase you can use SQLCipher:

SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase("test.db", getKey(), null);

Instead of SharedPreferences you can use EncryptedSharedPreferences:

String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);
EncryptedSharedPreferences.create(
    "secret",
    masterKeyAlias,
    context,
    EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
    EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
);

For Realm an encryption key can be specified in the config:

RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build();
Realm realm = Realm.getInstance(config);

See

java:S6293

Android comes with Android KeyStore, a secure container for storing key materials. It’s possible to define certain keys to be unlocked when users authenticate using biometric credentials. This way, even if the application process is compromised, the attacker cannot access the keys, as the presence of the authorized user is required.

These keys can be used to encrypt, sign, or create a message authentication code (MAC) as proof that the authentication result has not been tampered with. This protection defeats the scenario where an attacker with physical access to the device tries to hook into the application process and call the onAuthenticationSucceeded method directly. They would therefore be unable to extract the sensitive data or to perform the critical operations protected by the biometric authentication.

Ask Yourself Whether

The application contains:

  • Cryptographic keys / sensitive information that need to be protected using biometric authentication.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to tie the biometric authentication to a cryptographic operation by using a CryptoObject during authentication.

Sensitive Code Example

A CryptoObject is not used during authentication:

// ...
BiometricPrompt biometricPrompt = new BiometricPrompt(activity, executor, callback);
// ...
biometricPrompt.authenticate(promptInfo); // Noncompliant

Compliant Solution

A CryptoObject is used during authentication:

// ...
BiometricPrompt biometricPrompt = new BiometricPrompt(activity, executor, callback);
// ...
biometricPrompt.authenticate(promptInfo, new BiometricPrompt.CryptoObject(cipher)); // Compliant
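
The cipher passed to the CryptoObject must be backed by a key from the Android KeyStore. A minimal sketch, assuming a hypothetical AES key named "biometric_key" was previously generated with setUserAuthenticationRequired(true):

import java.security.KeyStore;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import android.security.keystore.KeyProperties;

// Retrieve the user-authentication-bound key from the Android KeyStore
KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
keyStore.load(null);
SecretKey key = (SecretKey) keyStore.getKey("biometric_key", null);

// Initialize the cipher that is wrapped in the CryptoObject
Cipher cipher = Cipher.getInstance(KeyProperties.KEY_ALGORITHM_AES + "/"
    + KeyProperties.BLOCK_MODE_GCM + "/" + KeyProperties.ENCRYPTION_PADDING_NONE);
cipher.init(Cipher.ENCRYPT_MODE, key);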

See

java:S2068

Because it is easy to extract strings from an application source code or binary, passwords should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, this practice has led to publicly disclosed vulnerabilities.

Passwords should be stored outside of the code in a configuration file, a database, or a password management service.

This rule flags instances of hard-coded passwords used in database and LDAP connections. It looks for hard-coded passwords in connection strings, and for variable names that match any of the patterns from the provided list.

Ask Yourself Whether

  • The password allows access to a sensitive component like a database, a file storage, an API, or a service.
  • The password is used in production environments.
  • Application re-distribution is required before updating the password.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

String username = "steve";
String password = "blue";
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test?" +
                  "user=" + username + "&password=" + password); // Sensitive

Compliant Solution

String username = getEncryptedUser();
String password = getEncryptedPassword();
Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test?" +
                  "user=" + username + "&password=" + password);

See

java:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application follows the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, this has led to publicly disclosed vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

These clients from the Apache Commons Net library are based on unencrypted protocols and are not recommended:

TelnetClient telnet = new TelnetClient(); // Sensitive

FTPClient ftpClient = new FTPClient(); // Sensitive

SMTPClient smtpClient = new SMTPClient(); // Sensitive

Unencrypted HTTP connections, when using the okhttp library for instance, should be avoided:

ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.CLEARTEXT) // Sensitive
  .build();

Android WebView can be configured to allow a secure origin to load content from any other origin, even if that origin is insecure (mixed content):

import android.webkit.WebView

WebView webView = findViewById(R.id.webview)
webView.getSettings().setMixedContentMode(MIXED_CONTENT_ALWAYS_ALLOW); // Sensitive

Compliant Solution

Instead, use these clients from the Apache Commons Net and JSch libraries:

JSch jsch = new JSch();

if(implicit) {
  // implicit mode is considered deprecated but offers the same security as explicit mode
  FTPSClient ftpsClient = new FTPSClient(true);
}
else {
  FTPSClient ftpsClient = new FTPSClient();
}

if(implicit) {
  // implicit mode is considered deprecated but offers the same security as explicit mode
  SMTPSClient smtpsClient = new SMTPSClient(true);
}
else {
  SMTPSClient smtpsClient = new SMTPSClient();
  smtpsClient.connect("127.0.0.1", 25);
  if (smtpsClient.execTLS()) {
    // commands
  }
}

Perform encrypted HTTP connections, with the okhttp library for instance:

ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
  .build();

The most secure mode for Android WebView is MIXED_CONTENT_NEVER_ALLOW:

import android.webkit.WebSettings;
import android.webkit.WebView;

WebView webView = findViewById(R.id.webview);
webView.getSettings().setMixedContentMode(WebSettings.MIXED_CONTENT_NEVER_ALLOW);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

java:S6300

Storing files locally is a common task for mobile applications. Files that are stored unencrypted can be read out and modified by an attacker with physical access to the device. Access to sensitive data can be harmful for the user of the application, for example when the device gets stolen.

Ask Yourself Whether

  • The file contains sensitive data that could cause harm when leaked.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt local files that contain sensitive information. On Android, the Jetpack Security class EncryptedFile can be used to easily encrypt files.

Sensitive Code Example

Files.write(path, content); // Sensitive

FileOutputStream out = new FileOutputStream(file); // Sensitive

FileWriter fw = new FileWriter("outfilename", false); // Sensitive

Compliant Solution

String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);

File file = new File(context.getFilesDir(), "secret_data");
EncryptedFile encryptedFile = new EncryptedFile.Builder(
    file,
    context,
    masterKeyAlias,
    EncryptedFile.FileEncryptionScheme.AES256_GCM_HKDF_4KB
).build();

// write to the encrypted file
FileOutputStream encryptedOutputStream = encryptedFile.openFileOutput();

See

java:S6301

When storing local data in a mobile application, it is common to use a database that can be encrypted. When encryption of this database is enabled, the encryption key must be protected properly.

Why is this an issue?

Mobile applications often need to store data (which might be sensitive) locally. For Android, there exist several libraries that simplify this process by offering a feature-rich database system. SQLCipher and Realm are examples of such libraries. These libraries often add support for database encryption, to protect the contents from being read by other apps or by attackers.

When using encryption for such a database, it is important that the encryption key stays secret. If this key is hardcoded in the application, then it should be considered compromised. The key will be known by anyone with access to the application’s binary code or source code. This means that the sensitive encrypted data can be decrypted by anyone having access to the binary of the mobile application.

Furthermore, if the key is hardcoded, it is the same for every user. A compromise of this encryption key implicates every user of the app.

In short, the encryption key is meant to stay secret and must never be hard-coded in the application.

What is the potential impact?

If an attacker is able to find the encryption key for the mobile database, this can potentially have severe consequences.

Theft of sensitive data

If a mobile database is encrypted, it is likely to contain data that is sensitive for the user or the app publisher. For example, it can contain personally identifiable information (PII), financial data, login credentials, or other sensitive user data.

By not protecting the encryption key properly, it becomes very easy for an attacker to recover it and then decrypt the mobile database. At that point, the theft of sensitive data might lead to identity theft, financial fraud, and other forms of malicious activities.

How to fix it in Realm

Code examples

In the example below, a local database is opened using a hardcoded key. To fix this, the key is moved to a secure location instead and retrieved using a getKey() method.

Noncompliant code example

String key = "gb09ym9ydoolp3w886d0tciczj6ve9kszqd65u7d126040gwy86xqimjpuuc788g";
RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(key.getBytes()) // Noncompliant
    .build();
Realm realm = Realm.getInstance(config);

Compliant solution

RealmConfiguration config = new RealmConfiguration.Builder()
    .encryptionKey(getKey())
    .build();
Realm realm = Realm.getInstance(config);

How does this work?

Using Android’s builtin key storage options

The Android Keystore system allows apps to store encryption keys in a container that is protected on a system level. Additionally, it can restrict when and how the keys are used. For example, it allows the app to require user authentication (for example using a fingerprint) before the key is made available. This is the recommended way to store cryptographic keys on Android.
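As a hedged illustration of this approach (the alias name and the simplified exception handling are assumptions, not part of the rule), the following sketch obtains, or creates on first use, an AES key that lives entirely inside the Android Keystore. The Realm key itself can then be generated randomly once, encrypted ("wrapped") with this Keystore key, and persisted only in wrapped form, so getKey() never touches a hard-coded value:

import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;

public class KeystoreKeyProvider {
    // Assumed alias; any stable, application-specific name works.
    private static final String ALIAS = "realm_wrapping_key";

    // Returns an AES key handle backed by the Android Keystore. The key
    // material never leaves the secure container; the app only gets a handle.
    public SecretKey getOrCreateWrappingKey() throws Exception {
        KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
        keyStore.load(null);
        if (keyStore.containsAlias(ALIAS)) {
            return ((KeyStore.SecretKeyEntry) keyStore.getEntry(ALIAS, null)).getSecretKey();
        }
        KeyGenerator generator = KeyGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore");
        generator.init(new KeyGenParameterSpec.Builder(ALIAS,
                KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT)
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .build());
        return generator.generateKey();
    }
}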

Dynamically retrieving encryption keys remotely

As user devices are less trusted than controlled environments such as the application backend, the latter should be preferred for the storage of encryption keys. This requires that a user’s device has an internet connection, which may not be suitable for every use case.

Going the extra mile

Avoid storing sensitive data on user devices

In general, it is always preferable to store as little sensitive data on user devices as possible.

Of course, some sensitive data always has to be stored on client devices, such as the data required for authentication. In this case, consider whether the application logic can also function with a hash (or otherwise non-reversible form) of that data. For example, if an email address is required for authentication, it might be possible to use and store a hashed version of this address instead.
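As a minimal sketch of that idea (the normalization step is an assumption, and a keyed or salted construction may be preferable in practice, see rule S2053), a one-way digest can stand in for the raw address:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// A lookup still works (hash the incoming value and compare), but a stolen
// device does not directly reveal the stored email address.
public static byte[] emailDigest(String email) throws NoSuchAlgorithmException {
    String normalized = email.trim().toLowerCase(); // assumed normalization
    return MessageDigest.getInstance("SHA-256")
            .digest(normalized.getBytes(StandardCharsets.UTF_8));
}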

Resources

Documentation

Standards

java:S5693

Rejecting requests with an excessive content length is a good practice: it controls network traffic intensity and thus resource consumption, helping to prevent DoS attacks.

Ask Yourself Whether

  • Size limits are not defined for the different resources of the web application.
  • The web application is not protected by rate-limiting features.
  • The web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or less for file uploads.
    • 2 MB or less for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

With the default limit value of 8388608 (8 MB):

A 100 MB file is allowed to be uploaded:

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  multipartResolver.setMaxUploadSize(104857600); // Sensitive (100MB)
  return multipartResolver;
}

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver(); // Sensitive, by default if maxUploadSize property is not defined, there is no limit and thus it's insecure
  return multipartResolver;
}

@Bean
public MultipartConfigElement multipartConfigElement() {
  MultipartConfigFactory factory = new MultipartConfigFactory(); // Sensitive, no limit by default
  return factory.createMultipartConfig();
}

Compliant Solution

File upload size is limited to 8 MB:

@Bean(name = "multipartResolver")
public CommonsMultipartResolver multipartResolver() {
  CommonsMultipartResolver multipartResolver = new CommonsMultipartResolver();
  multipartResolver.setMaxUploadSize(8388608); // Compliant (8 MB)
  return multipartResolver;
}
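For applications relying on the standard servlet multipart support rather than CommonsMultipartResolver, a minimal sketch of the same limit (assuming Spring Boot 2.1 or later, where MultipartConfigFactory accepts DataSize values) mirrors the earlier MultipartConfigElement example:

import javax.servlet.MultipartConfigElement;
import org.springframework.boot.web.servlet.MultipartConfigFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.util.unit.DataSize;

@Bean
public MultipartConfigElement multipartConfigElement() {
  MultipartConfigFactory factory = new MultipartConfigFactory();
  factory.setMaxFileSize(DataSize.ofMegabytes(8));    // per-file limit
  factory.setMaxRequestSize(DataSize.ofMegabytes(8)); // whole-request limit
  return factory.createMultipartConfig();
}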

See

java:S5344

The improper storage of passwords poses a significant security risk to software applications. This vulnerability arises when passwords are stored in plaintext or with a fast hashing algorithm. To exploit this vulnerability, an attacker typically requires access to the stored passwords.

Why is this an issue?

Attackers who gain access to the stored passwords could reuse them without further attacks or with little additional effort.
Once in possession of the plaintext passwords, they could gain unauthorized access to user accounts, potentially leading to various malicious activities.

What is the potential impact?

Plaintext or weakly hashed password storage poses a significant security risk to software applications.

Unauthorized Access

When passwords are stored in plaintext or with weak hashing algorithms, an attacker who gains access to the password database can easily retrieve and use the passwords to gain unauthorized access to user accounts. This can lead to various malicious activities, such as unauthorized data access, identity theft, or even financial fraud.

Credential Reuse

Many users tend to reuse passwords across multiple platforms. If an attacker obtains plaintext or weakly hashed passwords, they can potentially use these credentials to gain unauthorized access to other accounts held by the same user. This can have far-reaching consequences, as sensitive personal information or critical systems may be compromised.

Regulatory Compliance

Many industries and jurisdictions have specific regulations and standards to protect user data and ensure its confidentiality. Storing passwords in plaintext or with weak hashing algorithms can lead to non-compliance with these regulations, potentially resulting in legal consequences, financial penalties, and damage to the reputation of the software application and its developers.

How to fix it in Spring

Code examples

Noncompliant code example

The following code is vulnerable because it uses a legacy digest-based password encoding that is not considered secure.

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth, DataSource dataSource) throws Exception {
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("SELECT * FROM users WHERE username = ?")
    .passwordEncoder(new StandardPasswordEncoder()); // Noncompliant
}

Compliant solution

@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth, DataSource dataSource) throws Exception {
  auth.jdbcAuthentication()
    .dataSource(dataSource)
    .usersByUsernameQuery("SELECT * FROM users WHERE username = ?")
    .passwordEncoder(new BCryptPasswordEncoder());
}

How does this work?

Use secure password hashing algorithms

In general, you should rely on an algorithm that has no known security vulnerabilities. The MD5 and SHA-1 algorithms should not be used.

Some algorithms, such as the SHA family of functions, are considered strong for some use cases, but are too fast to compute and are therefore vulnerable to brute-force attacks, especially on hardware optimized for such attacks.

To protect passwords, it is therefore important to choose modern, slow password-hashing algorithms. The following algorithms are, in order of strength, the most secure password hashing algorithms to date:

  1. Argon2
  2. scrypt
  3. bcrypt
  4. PBKDF2

Argon2 is the preferred choice; the others should be used only when the stronger ones are unavailable. For systems that must use FIPS-140-certified algorithms, PBKDF2 should be used.

Whenever possible, choose the strongest algorithm available. If the algorithm currently used by your system should be upgraded, OWASP documents possible upgrade methods here: Upgrading Legacy Hashes.

In the previous example, BCryptPasswordEncoder is a password hashing function in Java that is designed to be secure and resistant to various types of attacks, including brute-force and rainbow table attacks. It is slow, adaptive, and automatically generates and embeds a salt.
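As a small illustrative sketch (the cost value 12 is an assumption to be tuned to your hardware), the work factor of BCryptPasswordEncoder can be raised above the library default of 10; each increment roughly doubles the hashing time:

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

public class PasswordHashingExample {
    public static void main(String[] args) {
        // Cost factor (log rounds): each +1 doubles the hashing time.
        PasswordEncoder encoder = new BCryptPasswordEncoder(12);

        String hash = encoder.encode("user-password");        // salt generated and embedded automatically
        boolean ok  = encoder.matches("user-password", hash); // verification re-uses the embedded salt
        System.out.println(ok); // true
    }
}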

Never store passwords in plaintext

A user password should never be stored in plaintext. Instead, a hash should be produced from it using a secure algorithm. When dealing with password storage security, best practices recommend relying on a slow hashing algorithm that makes brute-force attacks more difficult. Using a hashing function with adaptable computation and memory complexity is also recommended, so that the security level can be increased over time.

Adding a salt to the digest computation is also recommended to prevent pre-computed table attacks (see rule S2053).

Pitfalls

Pre-hashing passwords

As bcrypt has a maximum input length of 72 bytes in most implementations, some developers may be tempted to pre-hash the password with a stronger algorithm before hashing it with bcrypt.

Pre-hashing passwords with bcrypt is not recommended as it can lead to a specific range of issues. Using a strong salt and a high number of rounds is enough to protect the password.

More information about this can be found here: Pre-hashing Passwords with Bcrypt.

Resources

Documentation

Standards

java:S6432

When encrypting data using AES-GCM or AES-CCM, it is essential not to reuse the same initialization vector (IV, also called nonce) with a given key. To prevent this, it is recommended to either randomize the IV for each encryption or increment the IV after each encryption.

Why is this an issue?

When encrypting data using a counter (CTR) derived block cipher mode of operation, it is essential not to reuse the same initialization vector (IV) for a given key. An IV that complies with this requirement is called a "nonce" (number used once). Galois/Counter (GCM) and Counter with Cipher Block Chaining-Message Authentication Code (CCM) are both derived from counter mode.

When using AES-GCM or AES-CCM, a given key and IV pair will create a "keystream" that is used to encrypt a plaintext (original content) into a ciphertext (encrypted content). For any key and IV pair, this keystream is always deterministic. Because of this property, encrypting several plaintexts with one key and IV pair can be catastrophic. If an attacker has access to one plaintext and its associated ciphertext, they are able to decrypt everything that was created using the same pair.
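To make this property concrete, here is a minimal, self-contained demonstration (the key, nonce, and messages are arbitrary illustration values): two encryptions under the same key and nonce let anyone holding one plaintext recover the other, because XORing the two ciphertexts cancels the keystream.

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class NonceReuseDemo {
    public static void main(String[] args) throws Exception {
        byte[] key   = new byte[16]; // all-zero key, for demonstration only
        byte[] nonce = new byte[12]; // the flaw: the same nonce is used twice
        byte[] p1 = "attack at dawn!!".getBytes(StandardCharsets.UTF_8);
        byte[] p2 = "retreat at dusk!".getBytes(StandardCharsets.UTF_8);

        byte[] c1 = encrypt(key, nonce, p1);
        byte[] c2 = encrypt(key, nonce, p2);

        // c1 ^ c2 == p1 ^ p2 at every position, so knowing p1 reveals p2.
        for (int i = 0; i < p2.length; i++) {
            System.out.print((char) (c1[i] ^ c2[i] ^ p1[i])); // prints "retreat at dusk!"
        }
    }

    private static byte[] encrypt(byte[] key, byte[] nonce, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
                    new GCMParameterSpec(128, nonce));
        return cipher.doFinal(plaintext);
    }
}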

Additionally, IV reuse also drastically decreases the key recovery computational complexity by downgrading it to a simpler polynomial root-finding problem. This means that even without access to a plaintext/ciphertext pair, an attacker may still be able to decrypt all the sensitive data.

What is the potential impact?

If the encryption that is being used is flawed, attackers might be able to exploit it in several ways. They might be able to decrypt existing sensitive data or bypass key protections.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By not using the encryption algorithm correctly, the likelihood that an attacker might be able to recover the original sensitive data drastically increases.

Additional attack surface

Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them. If an attacker is able to modify the cleartext of the encrypted message, it might be possible to trigger other vulnerabilities in the code.

How to fix it in Java Cryptography Extension

Code examples

The example uses a hardcoded IV as a nonce, which makes the AES-GCM encryption insecure. To fix it, a nonce is randomly generated instead.

Noncompliant code example

public void encrypt(byte[] key, byte[] ptxt) throws Exception {
    byte[] nonce = "7cVgr5cbdCZV".getBytes("UTF-8");

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
    GCMParameterSpec gcmSpec = new GCMParameterSpec(128, nonce);

    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec); // Noncompliant
}

Compliant solution

public void encrypt(byte[] key, byte[] ptxt) throws Exception {
    SecureRandom random = new SecureRandom();
    byte[] nonce = new byte[12];
    random.nextBytes(nonce);

    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    SecretKeySpec keySpec = new SecretKeySpec(key, "AES");
    GCMParameterSpec gcmSpec = new GCMParameterSpec(128, nonce);

    cipher.init(Cipher.ENCRYPT_MODE, keySpec, gcmSpec);
}

How does this work?

For AES-GCM and AES-CCM, NIST recommends generating a nonce either with a deterministic approach or with a 'Random Bit Generator (RBG)'.

Generating nonces using random number generation

When using a randomized approach, NIST recommends a nonce of at least 96 bits generated by a cryptographically secure pseudorandom number generator (CSPRNG). Such a generator can create output with a sufficiently low probability of the same number being output twice (also called a collision) for a long time. However, after 2^32 generated numbers for the same key, NIST recommends rotating this key for a new one. After that amount of generated numbers, the probability of a collision is high enough to be considered insecure.

The code example above demonstrates how CSPRNGs can be used to generate nonces.

Be careful to use a random number generator that is sufficiently secure. Default (non-cryptographically secure) RNGs might be more prone to collisions in their output, which is catastrophic for counter-based encryption modes.

Deterministically generating nonces

One method to prevent the same IV from being used multiple times for the same key is to update the IV in a deterministic way after each encryption. The most straightforward deterministic method for this is a counter.

The way this works is simple: for any key, the first IV is the number zero. After this IV is used to encrypt something with a key, it is incremented for that key (and is now equal to 1). Although this requires additional bookkeeping, it should guarantee that for each encryption key, an IV is never repeated.

For a secure implementation, NIST suggests generating these nonces in two parts: a fixed field and an invocation field. The fixed field should be used to identify the device executing the encryption (for example, it could contain a device ID), such that for one key, no two devices can generate the same nonce. The invocation field contains the counter as described above. For a 96-bit nonce, NIST recommends (but does not require) using a 32-bit fixed field and a 64-bit invocation field. Additional details can be found in the NIST Special Publication 800-38D.
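A minimal sketch of that layout (the device identifier and its uniqueness are assumptions; in production the counter must also survive restarts):

import java.nio.ByteBuffer;

// 96-bit nonce: 32-bit fixed field (device ID) + 64-bit invocation counter.
public final class NonceGenerator {
    private final int deviceId;   // fixed field: unique per encrypting device
    private long invocations = 0; // invocation field: incremented per encryption

    public NonceGenerator(int deviceId) {
        this.deviceId = deviceId;
    }

    public synchronized byte[] next() {
        return ByteBuffer.allocate(12)
                .putInt(deviceId)       // 32-bit fixed field
                .putLong(invocations++) // 64-bit invocation field
                .array();
    }
}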

Resources

Standards

java:S6437

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The consequences vary greatly depending on the situation and on the audience the secret is exposed to. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, each user of the application is able to use the third-party service without limit and for their own needs, including in ways that were not intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this makes it possible to determine whether an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

The following code example is noncompliant because it uses a hardcoded secret value.

Noncompliant code example

import org.h2.security.SHA256;

String inputString = "s3cr37";
byte[] key         = inputString.getBytes();

SHA256.getHMAC(key, message);  // Noncompliant

Compliant solution

import org.h2.security.SHA256;

String inputString = System.getenv("SECRET");
byte[] key         = inputString.getBytes();

SHA256.getHMAC(key, message);

How does this work?

While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This allows the secret value to differ per environment and avoids storing the password in the source code itself.

Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change.

Resources

Documentation

Standards

java:S2077

Formatted SQL queries can be difficult to maintain and debug, and they increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use parameterized queries, prepared statements, or stored procedures, and bind user-provided values as parameters instead of concatenating them into the query string.

Sensitive Code Example

public User getUser(Connection con, String user) throws SQLException {

  Statement stmt1 = null;
  Statement stmt2 = null;
  PreparedStatement pstmt;
  try {
    stmt1 = con.createStatement();
    ResultSet rs1 = stmt1.executeQuery("GETDATE()"); // No issue; hardcoded query

    stmt2 = con.createStatement();
    ResultSet rs2 = stmt2.executeQuery("select FNAME, LNAME, SSN " +
                 "from USERS where UNAME=" + user);  // Sensitive

    pstmt = con.prepareStatement("select FNAME, LNAME, SSN " +
                 "from USERS where UNAME=" + user);  // Sensitive
    ResultSet rs3 = pstmt.executeQuery();

    //...
}

public User getUserHibernate(org.hibernate.Session session, String data) {

  org.hibernate.Query query = session.createQuery(
            "FROM students where fname = " + data);  // Sensitive
  // ...
}

Compliant Solution

public User getUser(Connection con, String user) throws SQLException {

  Statement stmt1 = null;
  PreparedStatement pstmt = null;
  String query = "select FNAME, LNAME, SSN " +
                 "from USERS where UNAME=?";
  try {
    stmt1 = con.createStatement();
    ResultSet rs1 = stmt1.executeQuery("GETDATE()");

    pstmt = con.prepareStatement(query);
    pstmt.setString(1, user);  // Good; PreparedStatements escape their inputs.
    ResultSet rs2 = pstmt.executeQuery();

    //...
  }
}

public User getUserHibernate(org.hibernate.Session session, String data) {

  org.hibernate.Query query = session.createQuery("FROM students where fname = ?");
  query = query.setParameter(0, data);  // Good; parameter binding escapes all input

  org.hibernate.Query query2 = session.createQuery("FROM students where fname = " + data); // Sensitive
  // ...
}

See

java:S4347

Cryptographic operations often rely on unpredictable random numbers to enhance security. These random numbers are created by cryptographically secure pseudo-random number generators (CSPRNG). It is important not to use a predictable seed with these random number generators; otherwise, the random numbers will also become predictable.

Why is this an issue?

Random number generators are often used to generate random values for cryptographic algorithms. When a random number generator is used for cryptographic purposes, the generated numbers must be as random and unpredictable as possible. When the random number generator is improperly seeded with a constant or a predictable value, its output will also be predictable.

This can have severe security implications for cryptographic operations that rely on the randomness of the generated numbers. By using a predictable seed, an attacker can potentially guess or deduce the generated numbers, compromising the security of whatever cryptographic algorithm relies on the random number generator.

What is the potential impact?

It is crucial to understand that the strength of cryptographic algorithms heavily relies on the quality of the random numbers used. By improperly seeding a CSPRNG, we introduce a significant weakness that can be exploited by attackers.

Insecure cryptographic keys

One of the primary use cases for CSPRNGs is generating cryptographic keys. If an attacker can predict the seed used to initialize the random number generator, they may be able to derive the same keys. Depending on the use case, this can lead to multiple severe outcomes, such as:

  • Being able to decrypt sensitive documents, leading to privacy breaches or identity theft.
  • Gaining access to a private key used for signing, allowing an attacker to forge digital signatures and impersonate legitimate entities.
  • Bypassing authentication mechanisms that rely on public-key infrastructure (PKI), which can be abused to gain unauthorized access to systems or networks.

Session hijacking and man-in-the-middle attack

Another scenario where this vulnerability can be exploited is in the generation of session tokens or nonces for secure communication protocols. If an attacker can predict the seed used to generate these tokens, they can impersonate legitimate users or intercept sensitive information.

How to fix it in Java SE

Code examples

The following code uses a cryptographically strong random number generator but seeds it with a predictable value, so the data it generates is not cryptographically strong.

Noncompliant code example

SecureRandom sr = new SecureRandom();
sr.setSeed(123456L); // Noncompliant
int v = sr.nextInt();

SecureRandom sr2 = new SecureRandom("abcdefghijklmnop".getBytes("us-ascii")); // Noncompliant
int v2 = sr2.nextInt();

Compliant solution

SecureRandom sr = new SecureRandom();
int v = sr.nextInt();

The following alternative is available for JDK 1.8 and higher:

SecureRandom sr = SecureRandom.getInstanceStrong();
int v = sr.nextInt();

How does this work?

When the randomly generated data needs to be cryptographically strong, SecureRandom is the correct class to use. However, its documentation also states that "any seed material passed to a SecureRandom object must be unpredictable". When no seed is passed by the user to the object, the SecureRandom object chooses an unpredictable seed by default. Therefore, the easiest way to fix the issue is to use the default constructor without any calls to SecureRandom.setSeed().

To go the extra mile, SecureRandom.getInstanceStrong() returns an instance of SecureRandom that is guaranteed to use a strong algorithm for its number generation.

If the randomly generated data is not used for cryptographic purposes and is not business critical, it may be a better choice to use java.util.Random instead. In this case, setting a predictable seed may be acceptable depending on the situation.
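A two-line sketch of that alternative (the seed value is arbitrary): a deliberately seeded java.util.Random gives reproducible, non-cryptographic randomness, which is convenient for replayable tests or shuffling display order.

import java.util.Random;

Random rng = new Random(42L); // predictable on purpose: the sequence can be replayed
int v = rng.nextInt();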

Resources

Documentation

Standards

java:S5679

The Security Assertion Markup Language (SAML) is a widely used standard in single sign-on systems. In a simplified version, the user authenticates to an Identity Provider which generates a signed SAML Response. This response is then forwarded to a Service Provider for validation and authentication.

Why is this an issue?

If the Service Provider does not properly validate the signatures of incoming SAML response messages, attackers might be able to manipulate the response content without the application noticing. In particular, they might be able to alter the user targeted by the authentication.

What is the potential impact?

By exploiting this vulnerability, an attacker can manipulate the SAML Response to impersonate a different user. This, in turn, can have various consequences on the application’s security.

Unauthorized Access

Exploiting this vulnerability allows an attacker with authenticated access to impersonate other users within the SAML-based SSO system. This can lead to unauthorized access to sensitive information, resources, or functionalities the attacker should not have. By masquerading as legitimate users, the attacker can bypass authentication mechanisms and gain unauthorized privileges, potentially compromising the entire system. By impersonating a user with higher privileges, the attacker can gain access to additional resources. Privilege escalation can lead to further compromise of other systems and unauthorized access to critical infrastructure.

Data Breaches

With the ability to impersonate other users, an attacker can gain access to sensitive data stored within the SAML-based SSO system. This includes personally identifiable information (PII), financial data, intellectual property, or any other confidential information. Data breaches can result in reputational damage, legal consequences, financial losses, and harm to individuals whose data is exposed.

How to fix it in Spring

Code examples

The following code examples are vulnerable because they explicitly configure the SAML parser to keep XML comments, which are then taken into account during signature checks. An attacker is able to change the field identifying the authenticated user by inserting XML comments.

Noncompliant code example

import org.opensaml.xml.parse.StaticBasicParserPool;
import org.opensaml.xml.parse.ParserPool;

public ParserPool parserPool() {
  StaticBasicParserPool staticBasicParserPool = new StaticBasicParserPool();
  staticBasicParserPool.setIgnoreComments(false); // Noncompliant
  return staticBasicParserPool;
}

import org.opensaml.xml.parse.BasicParserPool;
import org.opensaml.xml.parse.ParserPool;

public ParserPool parserPool() {
  BasicParserPool basicParserPool = new BasicParserPool();
  basicParserPool.setIgnoreComments(false); // Noncompliant
  return basicParserPool;
}

Compliant solution

import org.opensaml.xml.parse.StaticBasicParserPool;
import org.opensaml.xml.parse.ParserPool;

public ParserPool parserPool() {
  return new StaticBasicParserPool(); // comments are ignored by default
}

import org.opensaml.xml.parse.BasicParserPool;
import org.opensaml.xml.parse.ParserPool;

public ParserPool parserPool() {
  return new BasicParserPool(); // comments are ignored by default
}

Resources

Documentation

Articles & blog posts

Standards

java:S5322

Android applications can receive broadcasts from the system or other applications. Receiving intents is security-sensitive. For example, it has led in the past to the following vulnerabilities:

Receivers can be declared in the manifest or in the code to make them context-specific. If the receiver is declared in the manifest, Android will start the application (if it is not already running) once a matching broadcast is received. The receiver is an entry point into the application.

Other applications can send potentially malicious broadcasts, so it is important to consider broadcasts as untrusted and to limit the applications that can send broadcasts to the receiver.

Permissions can be specified to restrict broadcasts to authorized applications. Restrictions can be enforced by both the sender and receiver of a broadcast. If permissions are specified when registering a broadcast receiver, then only broadcasters who were granted this permission can send a message to the receiver.

This rule raises an issue when a receiver is registered without specifying any broadcast permission.

Ask Yourself Whether

  • The data extracted from intents is not sanitized.
  • Intents broadcast is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See the Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.IntentFilter;
import android.os.Build;
import android.os.Handler;
import android.support.annotation.RequiresApi;

public class MyIntentReceiver {

    @RequiresApi(api = Build.VERSION_CODES.O)
    public void register(Context context, BroadcastReceiver receiver,
                         IntentFilter filter,
                         String broadcastPermission,
                         Handler scheduler,
                         int flags) {
        context.registerReceiver(receiver, filter); // Sensitive
        context.registerReceiver(receiver, filter, flags); // Sensitive

        // Registering the receiver with "null" for broadcastPermission
        context.registerReceiver(receiver, filter, null, scheduler); // Sensitive
        context.registerReceiver(receiver, filter, null, scheduler, flags); // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.IntentFilter;
import android.os.Build;
import android.os.Handler;
import android.support.annotation.RequiresApi;

public class MyIntentReceiver {

    @RequiresApi(api = Build.VERSION_CODES.O)
    public void register(Context context, BroadcastReceiver receiver,
                         IntentFilter filter,
                         String broadcastPermission,
                         Handler scheduler,
                         int flags) {

        context.registerReceiver(receiver, filter, broadcastPermission, scheduler);
        context.registerReceiver(receiver, filter, broadcastPermission, scheduler, flags);
    }
}

See

java:S5689

Disclosure of version information, usually overlooked by developers but disclosed by default by the systems and frameworks in use, can pose a significant security risk depending on the production environment.

Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version.

Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities.

Ask Yourself Whether

  • Version information is accessible to end users.
  • Internal systems do not benefit from timely patch management workflows.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

In general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle.

The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header.
This can be achieved directly in the web application code, in the server (nginx, Apache), or in firewalls.

Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that this does not provide as much protection as regular updates and patches.
Security by obscurity is the weakest form of defense. It should never be the only defense mechanism and should always be combined with other security measures.

Sensitive Code Example

@GetMapping(value = "/example")
public ResponseEntity<String> example() {
  HttpHeaders responseHeaders = new HttpHeaders();
  responseHeaders.set("x-powered-by", "myproduct"); // Sensitive

  return new ResponseEntity<String>(
      "example",
      responseHeaders,
      HttpStatus.CREATED);
}

Compliant Solution

Do not disclose version information unless necessary. The x-powered-by or Server HTTP headers should not be used.
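As a minimal sketch mirroring the sensitive example above, the compliant endpoint simply omits any technology-identifying header:

@GetMapping(value = "/example")
public ResponseEntity<String> example() {
  // No "x-powered-by", "Server", or version headers are added to the response.
  return new ResponseEntity<String>("example", HttpStatus.CREATED);
}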

See

java:S5324

Storing data locally is a common task for mobile applications. Such data includes files among other things. One convenient way to store files is to use the external file storage, which usually offers a larger amount of disk space compared to internal storage.

Files created on the external storage are globally readable and writable. Therefore, a malicious application having the permissions WRITE_EXTERNAL_STORAGE or READ_EXTERNAL_STORAGE could try to read sensitive information from the files that other applications have stored on the external storage.

External storage can also be removed by the user (e.g., when it is an SD card), making the files unavailable to the application.

Ask Yourself Whether

Your application uses external storage to:

  • store files that contain sensitive data.
  • store files that are not meant to be shared with other applications.
  • store files that are critical for the application to work.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use internal storage whenever possible as the system prevents other apps from accessing this location.
  • Only use external storage if you need to share non-sensitive files with other applications.
  • If your application has to use the external storage to store sensitive data, make sure it encrypts the files using EncryptedFile.
  • Data coming from external storage should always be considered untrusted and should be validated.
  • As some external storage can be removed, make sure to never store files on it that are critical for the usability of your application.

Sensitive Code Example

import android.content.Context;

public class AccessExternalFiles {

    public void accessFiles(Context context) {
        context.getExternalFilesDir(null); // Sensitive
    }
}

Compliant Solution

import android.content.Context;

public class AccessExternalFiles {

    public void accessFiles(Context context) {
        context.getFilesDir();
    }
}

See

java:S5443

Operating systems have global directories where any user has write access. Those folders are mostly used as temporary storage areas, like /tmp on Linux-based systems. An application manipulating files in these folders is exposed to race conditions on filenames: a malicious user can try to create a file with a predictable name before the application does. A successful attack can result in other files being accessed, modified, corrupted or deleted. This risk is even higher if the application runs with elevated permissions.

In the past, it has led to the following vulnerabilities:

This rule raises an issue whenever it detects a hard-coded path to a publicly writable directory like /tmp (see the examples below). It also detects access to environment variables that point to publicly writable directories, e.g., TMP and TMPDIR.

  • /tmp
  • /var/tmp
  • /usr/tmp
  • /dev/shm
  • /dev/mqueue
  • /run/lock
  • /var/run/lock
  • /Library/Caches
  • /Users/Shared
  • /private/tmp
  • /private/var/tmp
  • \Windows\Temp
  • \Temp
  • \TMP

Ask Yourself Whether

  • Files are read from or written into a publicly writable folder
  • The application creates files with predictable names into a publicly writable folder

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a dedicated sub-folder with tightly controlled permissions
  • Use secure-by-design APIs to create temporary files. Such API will make sure:
    • The generated filename is unpredictable
    • The file is readable and writable only by the creating user ID
    • The file descriptor is not inherited by child processes
    • The file will be destroyed as soon as it is closed

Sensitive Code Example

new File("/tmp/myfile.txt"); // Sensitive
Paths.get("/tmp/myfile.txt"); // Sensitive

java.io.File.createTempFile("prefix", "suffix"); // Sensitive, will be in the default temporary-file directory.
java.nio.file.Files.createTempDirectory("prefix"); // Sensitive, will be in the default temporary-file directory.
Map<String, String> env = System.getenv();
env.get("TMP"); // Sensitive

Compliant Solution

new File("/myDirectory/myfile.txt");  // Compliant

File.createTempFile("prefix", "suffix", new File("/mySecureDirectory"));  // Compliant

if(SystemUtils.IS_OS_UNIX) {
  FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(PosixFilePermissions.fromString("rwx------"));
  Files.createTempFile("prefix", "suffix", attr); // Compliant
}
else {
  File f = Files.createTempFile("prefix", "suffix").toFile();  // Compliant
  f.setReadable(true, true);
  f.setWritable(true, true);
  f.setExecutable(true, true);
}

See

java:S5445

Temporary files are considered insecurely created when the file existence check is performed separately from the actual file creation. Such a situation can occur when creating temporary files using normal file handling functions or when using dedicated temporary file handling functions that are not atomic.

Why is this an issue?

Creating temporary files in a non-atomic way introduces race condition issues in the application’s behavior. Indeed, a third party can create a given file between when the application chooses its name and when it creates it.

In such a situation, the application might use a temporary file that it does not entirely control. In particular, this file’s permissions might be different than expected. This can lead to trust boundary issues.

What is the potential impact?

Attackers with control over a temporary file used by a vulnerable application will be able to modify it in a way that will affect the application’s logic. By changing this file’s Access Control List or other operating system-level properties, they could prevent the file from being deleted or emptied. They may also alter the file’s content before or while the application uses it.

Depending on why and how the affected temporary files are used, the exploitation of a race condition in an application can have various consequences. They can range from sensitive information disclosure to more serious application or hosting infrastructure compromise.

Information disclosure

Because attackers can control the permissions set on temporary files and prevent their removal, they can read what the application stores in them. This might be especially critical if this information is sensitive.

For example, an application might use temporary files to store users' session-related information. In such a case, attackers controlling those files can access session-stored information. This might allow them to take over authenticated users' identities and entitlements.

Attack surface extension

An application might use temporary files to store technical data for further reuse or as a communication channel between multiple components. In that case, it might consider those files part of its trust boundary and use their content without additional security validation or sanitization. In such a case, an attacker controlling the file content might use it as an attack vector for further compromise.

For example, an application might store serialized data in temporary files for later use. In such a case, attackers controlling those files' content can change it in a way that will lead to an insecure deserialization exploitation. It might allow them to execute arbitrary code on the application hosting server and take it over.

How to fix it

Code examples

The following code example is vulnerable to a race condition attack because it creates a temporary file using an unsafe API function.

Noncompliant code example

import java.io.File;
import java.io.IOException;

protected void example() throws IOException {
    File tempDir;
    tempDir = File.createTempFile("", ".");
    tempDir.delete();
    tempDir.mkdir();  // Noncompliant
}

Compliant solution

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

protected void example() throws IOException {
    Path tempPath = Files.createTempDirectory("");
    File tempDir = tempPath.toFile();
}

How does this work?

Applications should create temporary files so that no third party can read or modify their content. It requires that the files' name, location, and permissions are carefully chosen and set. This can be achieved in multiple ways depending on the applications' technology stacks.

Use a secure API function

Temporary file handling APIs generally provide secure functions to create temporary files. In most cases, they operate atomically, creating and opening a file with a unique and unpredictable name in a single call. Those functions can often be used to replace less secure alternatives without requiring significant development effort.

Here, the example compliant code uses the safer Files.createTempDirectory function to manage the creation of temporary directories.

Strong security controls

Temporary files can be created using unsafe functions and API as long as strong security controls are applied. Non-temporary file-handling functions and APIs can also be used for that purpose.

In general, applications should ensure that attackers cannot create a file before them. This translates into the following requirements when creating files (a minimal sketch follows below):

  • Files should be created in a non-public directory.
  • File names should be unique.
  • File names should be unpredictable. They should be generated using a cryptographically secure random generator.
  • File creation should fail if a target file already exists.

Moreover, when possible, it is recommended that applications destroy temporary files after they have finished using them.
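A minimal sketch applying these requirements (the base directory path is an assumption; it must be an application-owned, non-public location on a POSIX file system):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.security.SecureRandom;
import java.util.Set;

public static Path createPrivateTempFile() throws IOException {
    Path baseDir = Paths.get("/var/lib/myapp/tmp"); // assumed application-owned directory

    byte[] rnd = new byte[16];
    new SecureRandom().nextBytes(rnd);              // unpredictable name from a CSPRNG
    StringBuilder name = new StringBuilder("tmp-");
    for (byte b : rnd) name.append(String.format("%02x", b));

    FileAttribute<Set<PosixFilePermission>> ownerOnly =
        PosixFilePermissions.asFileAttribute(PosixFilePermissions.fromString("rw-------"));

    // Files.createFile is atomic and throws FileAlreadyExistsException if the
    // target already exists, so a pre-created file cannot go unnoticed.
    return Files.createFile(baseDir.resolve(name.toString()), ownerOnly);
}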

Resources

Documentation

  • OWASP - Insecure Temporary File

Standards

java:S6418

Because it is easy to extract strings from an application source code or binary, secrets should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Secrets should be stored outside of the source code in a configuration file or a management service for secrets.

This rule detects variables/fields having a name matching a list of words (secret, token, credential, auth, api[_.-]?key) being assigned a pseudorandom hard-coded value. The pseudorandomness of the hard-coded value is based on its entropy and the probability of it being human-readable. The randomness sensitivity can be adjusted if needed. Lower values will detect less random values, potentially raising more false positives.

Ask Yourself Whether

  • The secret allows access to a sensitive component like a database, a file storage, an API, or a service.
  • The secret is used in a production environment.
  • Application re-distribution is required before updating the secret.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the secret in a configuration file that is not pushed to the code repository.
  • Use your cloud provider’s service for managing secrets.
  • If a secret has been disclosed through the source code: revoke it and create a new one.

Sensitive Code Example

private static final String MY_SECRET = "47828a8dd77ee1eb9dde2d5e93cb221ce8c32b37";

public static void main(String[] args) {
  MyClass.callMyService(MY_SECRET);
}

Compliant Solution

Using AWS Secrets Manager:

import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;

public static void main(String[] args) {
  SecretsManagerClient secretsClient = ...
  MyClass.doSomething(secretsClient, "MY_SERVICE_SECRET");
}

public static void doSomething(SecretsManagerClient secretsClient, String secretName) {
  GetSecretValueRequest valueRequest = GetSecretValueRequest.builder()
    .secretId(secretName)
    .build();

  GetSecretValueResponse valueResponse = secretsClient.getSecretValue(valueRequest);
  String secret = valueResponse.secretString();
  // do something with the secret
  MyClass.callMyService(secret);
}

Using Azure Key Vault Secret:

import com.azure.identity.DefaultAzureCredentialBuilder;

import com.azure.security.keyvault.secrets.SecretClient;
import com.azure.security.keyvault.secrets.SecretClientBuilder;
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;

public static void main(String[] args) throws InterruptedException, IllegalArgumentException {
  String keyVaultName = System.getenv("KEY_VAULT_NAME");
  String keyVaultUri = "https://" + keyVaultName + ".vault.azure.net";

  SecretClient secretClient = new SecretClientBuilder()
    .vaultUrl(keyVaultUri)
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();

  MyClass.doSomething(secretClient, "MY_SERVICE_SECRET");
}

public static void doSomething(SecretClient secretClient, String secretName) {
  KeyVaultSecret retrievedSecret = secretClient.getSecret(secretName);
  String secret = retrievedSecret.getValue();

  // do something with the secret
  MyClass.callMyService(secret);
}

See

java:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability of a collision between two users' password/salt pairs may be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, a longer, cryptographically secure salt should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:

  • Argon2
  • PBKDF2
  • Scrypt
  • Bcrypt

When they are used for password storage, using a secure, random salt is required.

However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.

How to fix it in Java SE

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

import javax.crypto.spec.PBEParameterSpec;

public void hash() {
    byte[] salt = "salty".getBytes();
    PBEParameterSpec cipherSpec = new PBEParameterSpec(salt, 10000); // Noncompliant
}

Compliant solution

import java.security.SecureRandom;
import javax.crypto.spec.PBEParameterSpec;

public void hash() {
    SecureRandom random = new SecureRandom();
    byte[] salt = new byte[16];
    random.nextBytes(salt);

    PBEParameterSpec cipherSpec = new PBEParameterSpec(salt, 10000);
}

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. The salt is generated randomly, with a length that provides the required security level: industry standards recommend a salt of at least 16 bytes (128 bits).

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the nextBytes method from the SecureRandom class with a salt buffer of 16 bytes. This class implements a cryptographically secure pseudo-random number generator.

Resources

Standards

java:S5320

In Android applications, broadcasting intents is security-sensitive. For example, it has led in the past to the following vulnerability:

By default, broadcasted intents are visible to every application, exposing all sensitive information they contain.

This rule raises an issue when an intent is broadcasted without specifying any "receiver permission".

Ask Yourself Whether

  • The intent contains sensitive information.
  • Intent reception is not restricted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Restrict the access to broadcasted intents. See Android documentation for more information.

Sensitive Code Example

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.UserHandle;
import android.support.annotation.RequiresApi;

public class MyIntentBroadcast {
    @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1)
    public void broadcast(Intent intent, Context context, UserHandle user,
                          BroadcastReceiver resultReceiver, Handler scheduler, int initialCode,
                          String initialData, Bundle initialExtras,
                          String broadcastPermission) {
        context.sendBroadcast(intent); // Sensitive
        context.sendBroadcastAsUser(intent, user); // Sensitive

        // Broadcasting intent with "null" for receiverPermission
        context.sendBroadcast(intent, null); // Sensitive
        context.sendBroadcastAsUser(intent, user, null); // Sensitive
        context.sendOrderedBroadcast(intent, null); // Sensitive
        context.sendOrderedBroadcastAsUser(intent, user, null, resultReceiver,
                scheduler, initialCode, initialData, initialExtras); // Sensitive
    }
}

Compliant Solution

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Build;
import android.os.Bundle;
import android.os.Handler;
import android.os.UserHandle;
import android.support.annotation.RequiresApi;

public class MyIntentBroadcast {
    @RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR1)
    public void broadcast(Intent intent, Context context, UserHandle user,
                          BroadcastReceiver resultReceiver, Handler scheduler, int initialCode,
                          String initialData, Bundle initialExtras,
                          String broadcastPermission) {

        context.sendBroadcast(intent, broadcastPermission);
        context.sendBroadcastAsUser(intent, user, broadcastPermission);
        context.sendOrderedBroadcast(intent, broadcastPermission);
        context.sendOrderedBroadcastAsUser(intent, user,broadcastPermission, resultReceiver,
                scheduler, initialCode, initialData, initialExtras);
    }
}

See

java:S4036

When executing an OS command, unless you specify the full path to the executable, the directories listed in your application’s PATH environment variable will be searched for the executable. That search could leave an opening for an attacker if one of the elements in PATH is a directory under their control.

Ask Yourself Whether

  • The directories in the PATH environment variable may be defined by untrusted entities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

A fully qualified/absolute path should be used to specify the OS command to execute.

Sensitive Code Example

The full path of the command is not specified, so the executable will be searched for in all directories listed in the PATH environment variable:

Runtime.getRuntime().exec("make");  // Sensitive
Runtime.getRuntime().exec(new String[]{"make"});  // Sensitive

ProcessBuilder builder = new ProcessBuilder("make");  // Sensitive
builder.command("make");  // Sensitive

Compliant Solution

The command is defined by its full path:

Runtime.getRuntime().exec("/usr/bin/make");  // Compliant
Runtime.getRuntime().exec(new String[]{"~/bin/make"});  // Compliant

ProcessBuilder builder = new ProcessBuilder("./bin/make");  // Compliant
builder.command("../bin/make");  // Compliant
builder.command(Arrays.asList("..\\bin\\make", "-j8")); // Compliant

builder = new ProcessBuilder(Arrays.asList(".\\make"));  // Compliant
builder.command(Arrays.asList("C:\\bin\\make", "-j8"));  // Compliant
builder.command(Arrays.asList("\\\\SERVER\\bin\\make"));  // Compliant

See

java:S2092

When a cookie is protected with the secure attribute set to true, it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • The cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • It is unclear whether the website serves mixed content or not (i.e. uses HTTPS everywhere or not).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

If you create a security-sensitive cookie in your Java code:

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setSecure(false);  // Sensitive: a security-sensitive cookie is created with the secure flag set to false

By default the secure flag is set to false:

Cookie c = new Cookie(COOKIENAME, sensitivedata);  // Sensitive: a security-sensitive cookie is created with the secure flag not defined (by default set to false)

Compliant Solution

Cookie c = new Cookie(COOKIENAME, sensitivedata);
c.setSecure(true); // Compliant: the sensitive cookie will not be sent over an unencrypted HTTP request thanks to the secure flag being set to true

See

java:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive and has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers to the response, called CORS headers, that act as directives for the browser and change the access control policy, relaxing the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like the Origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

Java servlet framework:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setHeader("Content-Type", "text/plain; charset=utf-8");
    resp.setHeader("Access-Control-Allow-Origin", "*"); // Sensitive
    resp.setHeader("Access-Control-Allow-Credentials", "true");
    resp.setHeader("Access-Control-Allow-Methods", "GET");
    resp.getWriter().write("response");
}

Spring MVC framework:

@CrossOrigin // Sensitive
@RequestMapping("")
public class TestController {
    public String home(ModelMap model) {
        model.addAttribute("message", "ok ");
        return "view";
    }
}

CorsConfiguration config = new CorsConfiguration();
config.addAllowedOrigin("*"); // Sensitive
config.applyPermitDefaultValues(); // Sensitive

class Insecure implements WebMvcConfigurer {
  @Override
  public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/**")
      .allowedOrigins("*"); // Sensitive
  }
}

User-controlled origin:

public ResponseEntity<String> userControlledOrigin(@RequestHeader("Origin") String origin) {
  HttpHeaders responseHeaders = new HttpHeaders();
  responseHeaders.add("Access-Control-Allow-Origin", origin); // Sensitive

  return new ResponseEntity<>("content", responseHeaders, HttpStatus.CREATED);
}

Compliant Solution

Java Servlet framework:

@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setHeader("Content-Type", "text/plain; charset=utf-8");
    resp.setHeader("Access-Control-Allow-Origin", "trustedwebsite.com"); // Compliant
    resp.setHeader("Access-Control-Allow-Credentials", "true");
    resp.setHeader("Access-Control-Allow-Methods", "GET");
    resp.getWriter().write("response");
}

Spring MVC framework:

@CrossOrigin("trustedwebsite.com") // Compliant
@RequestMapping("")
public class TestController {
    public String home(ModelMap model) {
        model.addAttribute("message", "ok ");
        return "view";
    }
}

CorsConfiguration config = new CorsConfiguration();
config.addAllowedOrigin("http://domain2.com"); // Compliant

class Safe implements WebMvcConfigurer {
  @Override
  public void addCorsMappings(CorsRegistry registry) {
    registry.addMapping("/**")
      .allowedOrigins("safe.com"); // Compliant
  }
}

User-controlled origin validated with an allow-list:

public ResponseEntity<String> userControlledOrigin(@RequestHeader("Origin") String origin) {
  HttpHeaders responseHeaders = new HttpHeaders();
  if (trustedOrigins.contains(origin)) {
    responseHeaders.add("Access-Control-Allow-Origin", origin);
  }

  return new ResponseEntity<>("content", responseHeaders, HttpStatus.CREATED);
}

See

java:S5247

To reduce the risk of cross-site scripting attacks, templating systems such as Twig, Django, Smarty, and Groovy's template engine allow configuration of automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g. <a>) are transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that eliminates all cross-site scripting attacks; its effectiveness depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) is not sufficient when variables are used in an HTML attribute, because the ':' character is not escaped and an attack like the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

With JMustache by samskivert:

Mustache.compiler().escapeHTML(false).compile(template).execute(context); // Sensitive
Mustache.compiler().withEscaper(Escapers.NONE).compile(template).execute(context); // Sensitive

With Freemarker:

freemarker.template.Configuration configuration = new freemarker.template.Configuration();
configuration.setAutoEscapingPolicy(DISABLE_AUTO_ESCAPING_POLICY); // Sensitive

Compliant Solution

With JMustache by samskivert:

Mustache.compiler().compile(template).execute(context); // Compliant, auto-escaping is enabled by default
Mustache.compiler().escapeHTML(true).compile(template).execute(context); // Compliant

With Freemarker (see the setAutoEscapingPolicy documentation for more details):

freemarker.template.Configuration configuration = new freemarker.template.Configuration();
configuration.setAutoEscapingPolicy(ENABLE_IF_DEFAULT_AUTO_ESCAPING_POLICY); // Compliant

See

docker:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in cURL

Code examples

Noncompliant code example

FROM ubuntu:22.04

# Noncompliant
RUN curl --tlsv1.0 -O https://tlsv1-0.example.com/downloads/install.sh

Compliant solution

FROM ubuntu:22.04

RUN curl --tlsv1.2 -O https://tlsv1-3.example.com/downloads/install.sh

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are now deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
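
If every client of the endpoint supports it, requiring TLS v1.3 with curl is a one-flag change. The following sketch assumes the same placeholder URL as the examples above:

FROM ubuntu:22.04

RUN curl --tlsv1.3 -O https://tlsv1-3.example.com/downloads/install.sh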

Resources

Articles & blog posts

Standards

docker:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure, as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that the http protocol is being deprecated by major web browsers, and its use has led to vulnerabilities in the past.

Ask Yourself Whether

  • Application data needs to be protected against tampering or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

RUN curl http://www.example.com/

Compliant Solution

RUN curl https://www.example.com/

See

docker:S6469

For mount types secret and ssh, the Dockerfile RUN instruction supports a mode option for setting permissions. If you set this mode so that any user of the operating system can access the mount, it is vulnerable to leaks.

Why is this an issue?

Docker offers a feature to mount files and directories for specific RUN instructions when building Docker images. This feature can be used to provide secrets to commands that are executed during the build without baking them into the image. Additionally, it can be used to access SSH agents during the build.

The mode option is an octal value that allows you to specify the permissions for a particular file or directory. By default, on Docker, when mounting a secret, it is set to 0400.

For ssh, it is set by default to 0600:

  • The first digit 0 stands for special permissions (like setuid, setgid, and the sticky bit); here it means that no special permissions are set.
  • The second digit 6 (4+2 in octal format) means that the owner has read (4) and write (2) permissions.
  • The final 00 means that the group and others have no permissions.

If the others bit is set to a value other than 0 at build-time, any other process can access it when the RUN command is executed: the secrets are vulnerable to supply chain attacks that aim to siphon secrets from containers.

What is the potential impact?

Unauthorized access

The unintended audience can exploit the leaked private key or equivalent to authenticate themselves as the legitimate owner, gaining unauthorized entry to systems, servers, or accounts that accept the key for authentication.

This unauthorized access opens the door for various malicious activities, including data breaches, unauthorized modifications, and misuse of sensitive information.

How to fix it

Code examples

Noncompliant code example

# Noncompliant
RUN --mount=type=secret,id=build_secret,mode=0777 ./installer.sh

Compliant solution

RUN --mount=type=secret,id=build_secret,mode=0700 ./installer.sh

How does this work?

In general, always follow the least privilege principle, and set the others bit to 0. By default, if mode is not set, permissions are safe.

In case you made this change because you need to access secrets or agents as a low-privileged user, you can use the options uid and gid to provide access without having to resort to world-readable or writable permissions that might expose them to unintended parties.
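
As a minimal sketch, assuming the build step runs as a user with UID and GID 1000, the uid and gid options grant that user access to the secret while the others bits stay at 0:

RUN --mount=type=secret,id=build_secret,uid=1000,gid=1000,mode=0400 ./installer.sh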

Resources

Documentation

Standards

docker:S6502

Disabling builder sandboxes can lead to unauthorized access of the host system by malicious programs.

By default, programs executed by a RUN statement use only a subset of capabilities which are considered safe: this is called sandbox mode.

If you disable the sandbox with the --security=insecure option, the executed command can use the full set of Linux capabilities.
This can lead to a container escape. For example, an attacker with the SYS_ADMIN capability is able to mount devices from the host system.

This vulnerability allows an attacker who controls the behavior of the executed command to access the host system, break out of the container, and penetrate the infrastructure.

After a successful intrusion, the underlying systems are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of service

Ask Yourself Whether

  • The program is controlled by an external entity.
  • The program is part of a supply chain that could be a victim of a supply chain attack.

There is a risk if you answered yes to either of these questions.

Recommended Secure Coding Practices

  • Whenever possible, the sandbox should stay enabled to reduce unnecessary risk.
  • If elevated capabilities are absolutely necessary, make sure to verify the integrity of the program before executing it, for example as sketched below.
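
A minimal sketch of such an integrity check, where <expected-sha256> is a placeholder for the checksum published by the program's vendor:

FROM ubuntu:22.04
# Verify the script against a known checksum before executing it
RUN echo "<expected-sha256>  example.sh" | sha256sum -c \
    && ./example.sh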

Sensitive Code Example

# syntax=docker/dockerfile:1-labs
FROM ubuntu:22.04
# Sensitive
RUN --security=insecure ./example.sh

Compliant Solution

# syntax=docker/dockerfile:1-labs
FROM ubuntu:22.04
RUN ./example.sh
RUN --security=sandbox ./example.sh

See

docker:S6504

Ownership or write permissions for a file or directory copied to the Docker image have been assigned to a user other than root.

Write permissions enable malicious actors, who have a foothold on the container, to tamper with the resource and thus potentially manipulate the container’s expected behavior.
Manipulating files could disrupt services or aid in escalating privileges inside the container.

This also breaches the container immutability principle as it facilitates container changes during its life. Immutability, a container best practice, allows for a more reliable and reproducible behavior of Docker containers.

If a user is given ownership of a file but no write permissions, the user can still modify it by using their ownership to change the file permissions first. This is why both ownership and write permissions should be avoided.

Ask Yourself Whether

  • A non-root user owns the resource.
  • A non-root user has been granted write permissions for the resource.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Use --chmod to change the permissions so that only root users can write to files.
  • Use --chown to change the file/directory owner to a root user.
  • Be mindful of the container immutability principle.

Sensitive Code Example

FROM example

RUN useradd exampleuser
# Sensitive
COPY --chown=exampleuser:exampleuser src.py dst.py

Compliant Solution

FROM example

COPY --chown=root:root --chmod=755 src.py dst.py

See

docker:S6505

When installing dependencies, package managers like npm will automatically execute shell scripts distributed along with the source code. Post-install scripts, for example, are a common way to execute malicious code at install time whenever a package is compromised.

Ask Yourself Whether

  • The execution of dependency installation scripts is required for the application to function correctly.

There is a risk if you answered no to the question.

Recommended Secure Coding Practices

Execution of third-party scripts should be disabled if not strictly necessary for dependencies to work correctly. Doing this will reduce the attack surface and block a well-known supply chain attack vector.

Commands that are subject to this issue are: npm install, yarn install and yarn (yarn without an explicit command will execute install).

Sensitive Code Example

FROM node:latest

# Sensitive
RUN npm install

FROM node:latest

# Sensitive
RUN yarn install

Compliant Solution

FROM node:latest

RUN npm install --ignore-scripts

FROM node:latest

RUN yarn install --ignore-scripts

See

docker:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

FROM example
# Sensitive
ENV APP_DEBUG=true
# Sensitive
ENV ENV=development
CMD /run.sh

Compliant Solution

FROM example
ENV APP_DEBUG=false
ENV ENV=production
CMD /run.sh

See

docker:S4830

This vulnerability makes it possible for encrypted communications to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it

Code examples

The following code contains examples of disabled certificate validation.

Noncompliant code example

FROM ubuntu:22.04

# Noncompliant
RUN curl --insecure -O https://expired.example.com/downloads/install.sh

Compliant solution

FROM ubuntu:22.04

RUN curl -O https://new.example.com/downloads/install.sh

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
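
As a sketch, assuming a hypothetical internal CA certificate shipped with the build context, the certificate can be passed to curl explicitly instead of disabling validation with --insecure:

FROM ubuntu:22.04

# Illustrative internal CA certificate provided in the build context
COPY internal-ca.pem /usr/local/share/ca-certificates/internal-ca.crt

# Validate the server certificate against the internal CA
RUN curl --cacert /usr/local/share/ca-certificates/internal-ca.crt -O https://internal.example.com/downloads/install.sh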

Resources

Standards

docker:S6437

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

In Dockerfiles, hard-coded secrets and secrets passed through as variables or created at build-time will cause security risks. The secret information can be exposed either via the container environment, the image metadata, or the build environment logs.

What is the potential impact?

The consequences vary greatly depending on the situation and the audience to which the secret was exposed. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party service and is disclosed as part of the source code of client applications. With the secret, every user of the application can use the third-party service without limit for their own needs, including in ways that were not intended.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Best practices recommend using a secret vault for all secrets that must be accessed at container runtime. This will ensure the secret’s security and prevent any further unexpected disclosure. Depending on the development platform and the leaked secret type, multiple solutions are currently available.

For all secrets that must be accessed at image build time, it is recommended to rely on Docker Buildkit’s secret mount options. This will prevent secrets from being disclosed in the image’s metadata and build logs.

Additionally, investigations and remediation actions should be conducted to ensure the current and future security of the infrastructure.

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Code examples

Noncompliant code example

The following code sample generates a new SSH private key that will be stored in the generated image. This key should be considered compromised. Moreover, the SSH key encryption passphrase is also hardcoded.

FROM example

# Noncompliant
RUN ssh-keygen -N "passphrase" -t rsa -b 2048 -f /etc/ssh/rsa_key

RUN /example.sh --ssh /etc/ssh/rsa_key

The following code sample uses a seemingly hidden password which is actually leaked in the image metadata after the build.

FROM example
ARG PASSWORD

# Noncompliant
RUN wget --user=guest --password="$PASSWORD" https://example.com

Compliant solution

FROM example

RUN --mount=type=secret,id=ssh,target=/etc/ssh/rsa_key \
    /example.sh --ssh /etc/ssh/rsa_key

FROM example

RUN --mount=type=secret,id=wget,target=/home/user/.wgetrc \
    wget --user=guest https://example.com

For runtime secrets, best practices recommend relying on a vault service to pass secret information to the containers. Docker environment provides Swarm services that implement such a feature.
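
As an illustrative sketch of that approach, with hypothetical secret and service names, a Docker Swarm secret can be created once and attached to a service instead of being baked into the image:

docker secret create app_token ./app_token.txt
docker service create --name app --secret app_token myImage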

If such an option cannot be considered, store the runtime secrets in an environment file such as .env and then start the container with the --env-file argument:

docker run --env-file .env myImage

It is then important to ensure that the environment files are securely stored and generated.

Resources

Documentation

Standards

docker:S6500

Installing recommended packages automatically can lead to vulnerabilities in the Docker image.

Potentially unnecessary packages are installed via a known Debian package manager. These packages will increase the attack surface of the created container as they might contain unidentified vulnerabilities or malicious code. Those packages could be used as part of a broader supply chain attack. In general, the more packages are installed in a container, the weaker its security posture is.
Depending on the introduced vulnerabilities, a malicious actor accessing such a container could use these for privilege escalation.
Removing unused packages can also significantly reduce your Docker image size.

To be secure, remove unused packages where possible and ensure images are subject to routine vulnerability scans.

Ask Yourself Whether

  • Container vulnerability scans are not performed.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Avoid installing package dependencies that are not strictly required.

Sensitive Code Example

FROM ubuntu:22.04

# Sensitive
RUN apt install -y build-essential

# Sensitive
RUN apt-get install -y build-essential

# Sensitive
RUN aptitude install -y build-essential

Compliant Solution

FROM ubuntu:22.04

RUN apt --no-install-recommends install -y build-essential

RUN apt-get --no-install-recommends install -y build-essential

RUN aptitude --without-recommends install -y build-essential

See

docker:S6506

The usage of HTTPS is not enforced here. As it is possible for the HTTP client to follow redirects, such redirects might lead to websites using HTTP.

As HTTP is a clear-text protocol, it is considered insecure. Due to its lack of encryption, attackers that are able to sniff traffic from the network can read, modify, or corrupt the transported content. Therefore, allowing redirects to HTTP can lead to several risks:

  • Exposure of sensitive data
  • Malware-infected software updates or installers
  • Corruption of critical information

Even in isolated networks, such as segmented cloud or offline environments, it is important to ensure the usage of HTTPS. If not, then insider threats with access to these environments might still be able to monitor or tamper with communications.

Ask Yourself Whether

  • It is possible for the requested resource to be redirected to an insecure location in the future.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Ensure that the HTTP client only accepts HTTPS pages. In curl this can be enabled using the option --proto "=https".
  • If it is not necessary to follow HTTP redirects, disable this in the HTTP client. In curl this is done by omitting the -L or --location option. In wget this is done by adding the option --max-redirect=0.

Sensitive Code Example

In the examples below, an install script is downloaded using curl or wget and then executed.

While connections made using HTTPS are generally considered secure, https://might-redirect.example.com/install.sh might redirect to a location that uses HTTP. Downloads made using HTTP are not secure and can be intercepted and modified. An attacker could modify the install script to run malicious code inside the container.

curl will not follow redirects unless the -L or --location option is used.

FROM ubuntu:22.04

# Sensitive
RUN curl --tlsv1.2 -sSf -L https://might-redirect.example.com/install.sh | sh

wget will follow redirects by default.

FROM ubuntu:22.04

# Sensitive
RUN wget --secure-protocol=TLSv1_2 -q -O - https://might-redirect.example.com/install.sh | sh

Compliant Solution

If you expect the server to redirect the download to a new location, curl can use the option --proto "=https" to ensure requests are only made using HTTPS. Any attempt to redirect to a location using HTTP will result in an error.

FROM ubuntu:22.04

RUN curl --proto "=https" --tlsv1.2 -sSf -L https://might-redirect.example.com/install.sh | sh

wget does not support this functionality, so curl should be used instead.

If you expect the server to return the file without redirects, curl should not be instructed to follow redirects. Remove any -L or --location options from the command.

FROM ubuntu:22.04

RUN curl --tlsv1.2 -sSf https://might-redirect.example.com/install.sh | sh

wget uses the option --max-redirect=0 to disable redirects.

FROM ubuntu:22.04

RUN wget --secure-protocol=TLSv1_2 --max-redirect=0 -q -O - https://might-redirect.example.com/install.sh | sh

See

docker:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The container is designed to be a multi-user environment.
  • Services are run by dedicated low-privileged users to achieve privileges separation.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

To be secure, remove the unnecessary permissions. If required, use --chown to set the target user and group.

Sensitive Code Example

# Sensitive
ADD --chmod=777 src dst
# Sensitive
COPY --chmod=777 src dst
# Sensitive
RUN chmod +x resource
# Sensitive
RUN chmod u+s resource

Compliant Solution

ADD --chmod=754 src dst
COPY --chown=user:user --chmod=744 src dst
RUN chmod u+x resource
RUN chmod +t resource

See

docker:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it is even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2, because they slow down brute force attacks.

Sensitive Code Example

FROM ubuntu:22.04

# Sensitive
RUN echo "a40216e7c028e7d77f1aec22d2bbd5f9a357016f  go1.20.linux-amd64.tar.gz" | sha1sum -c
RUN tar -C /usr/local -xzf go1.20.linux-amd64.tar.gz
ENV PATH="$PATH:/usr/local/go/bin"

Compliant Solution

FROM ubuntu:22.04

RUN echo "5a9ebcc65c1cce56e0d2dc616aff4c4cedcfbda8cc6f0288cc08cda3b18dcbf1  go1.20.linux-amd64.tar.gz" | sha256sum -c
RUN tar -C /usr/local -xzf go1.20.linux-amd64.tar.gz
ENV PATH="$PATH:/usr/local/go/bin"

See

docker:S6431

Using host operating system namespaces can lead to compromise of the host system.
Opening network services of the local host system to the container creates a new attack surface for attackers.

Host network sharing could provide a significant performance advantage for workloads that require critical network performance. However, the successful exploitation of this attack vector could have a catastrophic impact on confidentiality within the host.

Ask Yourself Whether

  • The host exposes sensitive network services.
  • The performance of the container’s services does not rely on operating system namespaces.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not use host operating system namespaces.

Sensitive Code Example

# syntax=docker/dockerfile:1.3
FROM example
# Sensitive
RUN --network=host wget -O /home/sessions http://127.0.0.1:9000/sessions

Compliant Solution

# syntax=docker/dockerfile:1.3
FROM example
RUN --network=none wget -O /home/sessions http://127.0.0.1:9000/sessions

See

docker:S6472

Using ENV and ARG to handle secrets can lead to sensitive information being disclosed to an inappropriate sphere.

The ARG and ENV instructions in a Dockerfile are used to configure the image build and the container environment, respectively. Both are available at image build time and during the execution of commands in the container; ENV is also available at runtime.

In most cases, build-time and environment variables are used to propagate configuration items from the host to the image or container. A typical example of an environment variable is the PATH variable, used to configure where system executables are searched for.

Using ARG and ENV to propagate configuration entries that contain secrets causes a security risk. Indeed, in most cases, artifacts of those values are kept in the final image. The secret information can leak via the container environment itself, the image metadata, or the build environment logs.

The concrete impact of such an issue highly depends on the secret’s purpose and the exposure sphere:

  • Financial impact if a paid service API key is disclosed and used.
  • Application compromise if an application’s secret, like a session signing key, is disclosed.
  • Infrastructure component takeover, if a system secret, like a remote access key, is leaked.

Ask Yourself Whether

  • The variable contains a value that should be kept confidential.
  • The container image or Dockerfile will be distributed to users who do not need to know the secret value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use Buildkit’s secret mount options when secrets have to be used at build time.
  • For runtime secret variables, best practice is to set them only at runtime, for example with the --env option of the docker run command.

Note that, in both cases, the files exposing the secrets should be securely stored and not exposed to a large sphere. In most cases, using a secret vault or another similar component should be preferred. For example, Docker Swarm provides a secrets service that can be used to handle most confidential data.

Sensitive Code Example

FROM example
# Sensitive
ARG ACCESS_TOKEN
# Sensitive
ENV ACCESS_TOKEN=${ACCESS_TOKEN}
CMD /run.sh

Compliant Solution

For build time secrets, use Buildkit’s secret mount type instead:

FROM example
RUN --mount=type=secret,id=build_secret ./installer.sh

For runtime secrets, leave the environment variables empty until runtime:

FROM example
ENV ACCESS_TOKEN=""
CMD /run.sh

Store the runtime secrets in an environment file (such as .env) and then start the container with the --env-file argument:

docker run --env-file .env myImage

See

docker:S6473

Exposing administration services can lead to unauthorized access to containers or escalation of privilege inside of containers.

A port that is commonly used for administration services is marked as being open through the EXPOSE command. Administration services like SSH might contain vulnerabilities, hard-coded credentials, or other security issues that increase the attack surface of a Docker deployment.
Even if the ports of the services do not get forwarded to the host system, by default they are reachable from other containers in the same network. A malicious actor that gets access to one container could use such services to escalate access and privileges.

Removing the EXPOSE command is not sufficient to be secure. The port is still open and the service accessible. To be secure, no administration services should be started. Instead, try to access the required information from the host system. For example, if the administration service is included to access logs or debug a service, you can do this from the host system instead. Docker allows you to read out any file that is inside of a container and to spawn a shell inside of a container if necessary.
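
For instance, with an illustrative container name and paths, the standard Docker CLI on the host already covers these debugging needs without running an SSH server inside the container:

# Read a log file out of a running container
docker cp example-container:/var/log/app.log ./app.log

# Spawn an interactive shell inside the container when needed
docker exec -it example-container /bin/sh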

Ask Yourself Whether

  • The container starts an administration service.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Do not start SSH, VNC, RDP or similar administration services in containers.

Sensitive Code Example

FROM ubuntu:22.04
# Sensitive
EXPOSE 22
CMD ["/usr/sbin/sshd", "-f", "/etc/ssh/sshd_config", "-D"]

See

docker:S6497

This rule is deprecated; use S6596 instead.

A container image digest uniquely and immutably identifies a container image. A tag, on the other hand, is a mutable reference to a container image.

This tag can be updated to point to another version of the container at any point in time.
In general, the use of image digests instead of tags is intended to keep determinism stable within a system or infrastructure for reliability reasons.

The problem is that pulling such an image prevents the resulting container from being updated or patched in order to remove vulnerabilities or significant bugs.

Ask Yourself Whether

  • You expect to receive security updates of the base image.

There is a risk if you answer yes to this question.

Recommended Secure Coding Practices

Containers should get the latest security updates. If there is a need for determinism, the solution is to find tags that are not as prone to change as latest or shared tags.

To do so, favor a more precise tag that uses semantic versioning and target a major version, for example.

Sensitive Code Example

FROM mongo@sha256:8eb8f46e22f5ccf1feb7f0831d02032b187781b178cb971cd1222556a6cee9d1

RUN echo ls

Compliant Solution

Here, mongo:6.0 is better than using a digest, and also better than a more precise version such as 6.0.4, because pinning 6.0.4 would prevent the 6.0.5 security updates from being applied:

FROM mongo:6.0

RUN echo ls

See

docker:S6470

When building a Docker image from a Dockerfile, a context directory is used and sent to the Docker daemon before the actual build starts. This context directory usually contains the Dockerfile itself, along with all the files that will be necessary for the build to succeed. This generally includes:

  • the source code of applications to set up in the container.
  • configuration files for other software components.
  • other necessary packages or components.

The COPY and ADD directives in the Dockerfiles are then used to actually copy content from the context directory to the image file system.

When COPY or ADD are used to recursively copy entire top-level directories or multiple items whose names are determined at build-time, unexpected files might get copied to the image filesystem. It could affect their confidentiality.

Ask Yourself Whether

  • The copied files and directories might contain sensitive data that should be kept confidential.
  • The context directory contains files and directories that have no functional purpose for the final container image.

There is a risk if you answered yes to any of those questions.

Keep in mind that the content of the context directory might change depending on the build environment and over time.

Recommended Secure Coding Practices

  • Limit the usage of globbing in the COPY and ADD sources definition.
  • Avoid copying the entire context directory to the image filesystem.
  • Prefer providing an explicit list of files and directories that are required for the image to properly run.

Sensitive Code Example

Copying the complete context directory:

FROM ubuntu:22.04
# Sensitive
COPY . .
CMD /run.sh

Copying multiple files and directories whose names are expanded at build time:

FROM ubuntu:22.04
# Sensitive
COPY ./example* /
COPY ./run.sh /
CMD /run.sh

Compliant Solution

FROM ubuntu:22.04
COPY ./example1 /example1
COPY ./example2 /example2
COPY ./run.sh /
CMD /run.sh

See

docker:S6471

Running containers as a privileged user weakens their runtime security, allowing any user whose code runs on the container to perform administrative actions.
In Linux containers, the privileged user is usually named root. In Windows containers, the equivalent is ContainerAdministrator.

A malicious user can run code on a system either through actions that could be deemed legitimate, depending on internal business logic or operational management shells, or through malicious actions, for example arbitrary code execution after exploiting a service that the container hosts.

Suppose the container is not hardened to prevent using a shell, interpreter, or Linux capabilities. In this case, the malicious user can read and exfiltrate any file (including Docker volumes), open new network connections, install malicious software, or, worse, break out of the container’s isolation context by exploiting other components.

This gives attackers the opportunity to steal important infrastructure files, intellectual property, or personal data.

Depending on the infrastructure’s resilience, attackers may then extend their attack to other services, such as Kubernetes clusters or cloud providers, in order to maximize their reach.

Ask Yourself Whether

This container:

  • Serves services accessible from the Internet.
  • Does not require a privileged user to run.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

In the Dockerfile:

  • Create a new default user and use it with the USER statement.
    • Some container maintainers create a specific user to be used without explicitly setting it as default, such as postgresql or zookeeper. It is recommended to use these users instead of root.
    • On Windows containers, the ContainerUser is available for this purpose.

Or, at launch time:

  • Use the user argument when calling Docker or in the docker-compose file.
  • Add fine-grained Linux capabilities to perform specific actions that require root privileges.

If this image is already explicitly set to launch with a non-privileged user, you can add it to the safe images list rule property of your SonarQube instance, without the tag.

Sensitive Code Example

For any image that does not provide a user by default, regardless of their underlying operating system:

# Sensitive
FROM alpine

ENTRYPOINT ["id"]

For multi-stage builds, the last stage is non-compliant if it does not contain the USER instruction with a non-root user:

FROM alpine AS builder
COPY Makefile ./src /
RUN make build
USER nonroot

# Sensitive, previous user settings are dropped
FROM alpine AS runtime
COPY --from=builder bin/production /app
ENTRYPOINT ["/app/production"]

Compliant Solution

For Linux-based images and scratch-based images that untar a Linux distribution:

FROM alpine

RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot

USER nonroot

ENTRYPOINT ["id"]

For Windows-based images, you can use ContainerUser or create a new user:

FROM mcr.microsoft.com/windows/servercore:ltsc2019

RUN net user /add nonroot

USER nonroot

For multi-stage builds, the non-root user should be on the last stage:

FROM alpine as builder
COPY Makefile ./src /
RUN make build

FROM alpine as runtime
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot
COPY --from=builder bin/production /app
USER nonroot
ENTRYPOINT ["/app/production"]

For images that use scratch as their base, it is not possible to add non-privileged users by default. To do this, add an additional build stage to add the group and user, and later copy /etc/passwd.

Here is an example that uses adduser in the first stage to generate a user and add it to the /etc/passwd file. In the next stage, this user is added by copying that file over from the previous stage:

FROM alpine:latest as security_provider
RUN addgroup -S nonroot \
    && adduser -S nonroot -G nonroot

FROM scratch as production
COPY --from=security_provider /etc/passwd /etc/passwd
USER nonroot
COPY production_binary /app
ENTRYPOINT ["/app/production_binary"]

See

scala:S1313

Hardcoding IP addresses is security-sensitive and has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It misleadingly encourages using the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when an IP address is hardcoded, solving the issue takes more time, which increases the attack's impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without having to rebuild the software.

Sensitive Code Example

val ip = "192.168.12.42" // Sensitive
val socket = new Socket(ip, 6667)

Compliant Solution

val ips = Source.fromFile(configuration_file).getLines.toList // Compliant
val socket = new Socket(ips(0), 6667)

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

scala:S2068

Because it is easy to extract strings from an application’s source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, hard-coded credentials have led to vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

See

kubernetes:S6428

Running containers in privileged mode can reduce the resilience of a cluster in the event of a security incident because it weakens the isolation between hosts and containers.

Process permissions in privileged containers are essentially the same as root permissions on the host. If these processes are not protected by robust security measures, an attacker who compromises a root process on a Pod’s host is likely to gain the ability to pivot within the cluster.
Depending on how resilient the cluster is, attackers can extend their attack to the cluster by compromising the nodes from which the cluster launched the process.

Ask Yourself Whether

  • The services of this Pod are accessible to people who are not administrators of the Kubernetes cluster.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Disable privileged mode.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        privileged: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        privileged: false

See

kubernetes:S6865

Why is this an issue?

Service account tokens are Kubernetes secrets created automatically to authenticate applications running inside pods to the API server. If a pod is compromised, an attacker could use this token to gain access to other resources in the cluster.

For example, they could create new pods, modify existing ones, or even delete critical system pods, depending on the permissions associated with the service account.

Therefore, it’s recommended to disable the automounting of service account tokens when it’s not necessary for the application running in the pod.

What is the potential impact?

Unauthorized Access

If a pod with a mounted service account gets compromised, an attacker could potentially use the token to interact with the Kubernetes API, possibly leading to unauthorized access to other resources in the cluster.

Privilege Escalation

Service account tokens are often bound with roles that have extensive permissions. If these tokens are exposed, it could lead to privilege escalation where an attacker gains higher-level permissions than intended.

Data Breach

Service account tokens can be used to access sensitive data stored in the Kubernetes cluster. If these tokens are compromised, it could lead to a data breach.

Denial of Service

An attacker with access to a service account token could potentially overload the Kubernetes API server by sending a large number of requests, leading to a Denial of Service (DoS) attack.

How to fix it

Code examples

Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec: # Noncompliant
  containers:
  - name: example-pod
    image: nginx:1.25.3

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-pod
    image: nginx:1.25.3
  automountServiceAccountToken: false

How does this work?

The automounting of service account tokens can be disabled by setting automountServiceAccountToken: false in the pod’s specification. Additionally, it can be disabled in the configuration of the associated service account, as shown below.
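
For example, a minimal sketch of a service account with automounting disabled; the account name is illustrative:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa
automountServiceAccountToken: false # pods using this account get no token unless they opt in explicitly

Note that a pod-level automountServiceAccountToken setting takes precedence over the service account’s setting.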

Resources

Documentation

Standards

kubernetes:S6867

Why is this an issue?

Using wildcards when defining Role-Based Access Control (RBAC) permissions in Kubernetes can lead to significant security issues. This is because it grants overly broad permissions, potentially allowing access to sensitive resources.

RBAC is designed to limit the access rights of users within the system by assigning roles to them. These roles define what actions a user can perform and on which resources. When a wildcard is used, it means that the role has access to all resources/verbs, bypassing the principle of least privilege. This principle states that users should have only the minimal permissions they need to perform their job function.

What is the potential impact?

If an attacker gains access to a role with wildcard permissions, they could potentially read, modify, or delete any resource in the Kubernetes cluster, leading to data breaches, service disruptions, or other malicious activities.

How to fix it

Code examples

Noncompliant code example

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["*"] # Noncompliant
    verbs: ["get", "list"]

Compliant solution

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]

How does this work?

When defining RBAC permissions, it is important to follow the principle of least privilege. By explicitly specifying the verbs and resources a user should have access to instead of using wildcards, you can ensure that users have only the permissions they need to perform their job function.

Resources

Documentation

Standards

kubernetes:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases attackers’ chances of successfully leaking data or stealing credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application follows the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, the use of clear-text protocols has led to several publicly disclosed vulnerabilities.

Ask Yourself Whether

  • Application data needs to be protected against tampering or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: curlimages/curl
        command: ["curl"]
        args: ["http://example.com/"] # Sensitive

Compliant Solution

apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: curlimages/curl
        command: ["curl"]
        args: ["https://example.com/"]

See

kubernetes:S6429

Exposing Docker sockets can lead to compromise of the host systems.

The Docker daemon provides an API to access its functionality, for example through a UNIX domain socket. Mounting the Docker socket into a container allows the container to control the Docker daemon of the host system, resulting in full access over the whole system. A compromised or rogue container with access to the Docker socket could endanger the integrity of the whole Kubernetes cluster.

Ask Yourself Whether

  • The Pod is untrusted or might contain vulnerabilities.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It is recommended to never add a Docker socket as a volume to a Pod.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /var/run/docker.sock # Sensitive
      type: Socket

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container

See

kubernetes:S6433

Mounting sensitive file system paths can lead to information disclosure and compromise of the host systems.

System paths can contain sensitive information like configuration files or cache files. Those might be used by attackers to expand permissions or to collect information for further attacks. System paths can also contain binaries and scripts that might be executed by the host system periodically. A compromised or rogue container with access to sensitive files could endanger the integrity of the whole Kubernetes cluster.

Ask Yourself Whether

  • The mounted file path contains sensitive information.
  • The mounted file path contains configuration files or executables that are writable.
  • The Pod is untrusted or might contain vulnerabilities.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to avoid mounting sensitive system file paths into containers. If mounting such a path is necessary due to the architecture, grant the least privilege possible, for instance by making the mount read-only to prevent unwanted modifications.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /etc # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /mnt/nfs

See

kubernetes:S6864

Why is this an issue?

A memory limit is a configuration that sets the maximum amount of memory that a container can use. It is part of the resource management functionality of Kubernetes, which allows for the control and allocation of computational resources to containers.

When a memory limit is set for a container, Kubernetes ensures that the container does not exceed the specified limit. If a container tries to use more memory than its limit, the offending process is terminated by the kernel’s out-of-memory (OOM) killer, which can bring down the container.

Without a memory limit, a container can potentially consume all available memory on a node, which can lead to unpredictable behavior of the container or the node itself. Therefore, defining a memory limit for each container is a best practice in Kubernetes configurations. It helps in managing resources effectively and ensures that a single container does not monopolize the memory resources of a node.

What is the potential impact?

Denial of Service

Without a memory limit, a container can consume all available memory on a node. This could lead to a Denial of Service (DoS) condition where other containers on the same node are starved of memory. These containers may slow down, become unresponsive, or even crash, affecting the overall functionality and availability of applications running on them.

Inefficient Resource Allocation

When containers lack specified resource requests, the Kubernetes scheduler may not make optimal decisions about pod placement and resource contention management. This could result in the scheduler placing a resource-intensive pod on a node with insufficient resources, leading to performance issues or even node failure.

How to fix it

Code examples

To avoid potential issues, specify a memory limit for each container.

Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web # Noncompliant
      image: nginx

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        limits:
          memory: 100Mi

How does this work?

A limit can be set through the property resources.limits.memory of a container. Alternatively, a default limit for a namespace can be set with LimitRange.
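
For instance, here is a minimal LimitRange sketch that applies a default memory limit to every container created in the namespace that does not declare one; the values are illustrative, and the same mechanism covers the cpu and ephemeral-storage limits discussed under S6869 and S6870 below:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-mem-limit
  namespace: default
spec:
  limits:
    - type: Container
      default:
        memory: 100Mi     # limit applied when a container declares none
      defaultRequest:
        memory: 50Mi      # request applied when a container declares none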

Resources

Documentation

Standards

kubernetes:S6868

Why is this an issue?

Allowing command execution (exec) for roles in a Kubernetes cluster can pose a significant security risk. This is because it provides the user with the ability to execute arbitrary commands within a container, potentially leading to unauthorized access or data breaches.

In a production Kubernetes cluster, exec permissions are typically unnecessary due to the principle of least privilege, which suggests that a user or process should only have the minimum permissions necessary to perform its function. Additionally, containers in production are often treated as immutable infrastructure, meaning they should not be changed once deployed. Any changes should be made to the container image, which is then used to deploy a new container.

What is the potential impact?

Exploiting Vulnerabilities Within the Container

If a user or service has the ability to execute commands within a container, they could potentially identify and exploit vulnerabilities within the container’s software. This could include exploiting known vulnerabilities in outdated software versions, or finding and exploiting new vulnerabilities. This could lead to unauthorized access to the container, allowing the attacker to manipulate its operations or access its data.

Installing Malicious Software

Command execution permissions could also be used to install malicious software within a container. This could include malware, spyware, ransomware, or other types of harmful software. Once installed, this software could cause a wide range of issues, from data corruption or loss, to providing a backdoor for further attacks. It could also be used to create a botnet, using the compromised container to launch attacks on other systems.

Extracting Sensitive Data

If an attacker has the ability to execute commands within a container, they could potentially access and extract sensitive data. This could include user data, confidential business information, or other types of sensitive data. The extracted data could then be used for a wide range of malicious purposes, from identity theft to corporate espionage. This could lead to significant financial loss, damage to reputation, and potential legal consequences.

How to fix it

Code examples

Noncompliant code example

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods/exec"] # Noncompliant
    verbs: ["create"]

Compliant solution

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]

How does this work?

The exec permissions are set by allowing the create verb for the pods/exec resource. Removing this permission will prevent users and services from executing arbitrary commands within containers.

Resources

Documentation

Standards

kubernetes:S6869

Why is this an issue?

A CPU limitation for a container is a specified boundary or restriction that determines the maximum amount of CPU resources that a container can utilize. It is a part of resource management in a containerized environment, and it is set to ensure that a single container does not monopolize the CPU resources of the host machine.

CPU limitations are important for maintaining a balanced and efficient system. They help in distributing resources fairly among different containers, ensuring that no single container can cause a system-wide slowdown by consuming more than its fair share of CPU resources.

What is the potential impact?

Performance degradation

Without CPU limitations, a single container could monopolize all available CPU resources, leading to a system-wide slowdown. Other containers or processes on the same host might be deprived of the necessary CPU resources, causing them to function inefficiently.

System instability

In extreme cases, a container with no CPU limit could cause the host machine to become unresponsive. This can lead to system downtime and potential loss of data, disrupting critical operations and impacting system reliability.

How to fix it

Code examples

Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web # Noncompliant
      image: nginx

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        limits:
          cpu: 0.5

How does this work?

A limit can be set through the property resources.limits.cpu of a container. Alternatively, a default limit for a namespace can be set with LimitRange.

Resources

Documentation

Standards

kubernetes:S5849

Setting capabilities can lead to privilege escalation and container escapes.

Linux capabilities allow you to assign narrow slices of root's permissions to processes. A thread with capabilities bypasses the normal kernel security checks to execute high-privilege actions such as mounting a device to a directory, without requiring additional root privileges.

In a container, capabilities might allow access to resources from the host system, which can result in container escapes. For example, with the SYS_ADMIN capability an attacker might be able to mount devices from the host system inside the container.

Ask Yourself Whether

Capabilities are granted:

  • To a process that does not need all of them to do its job.
  • To an untrusted process.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Capabilities are high privileges, traditionally associated with the superuser (root), so make sure that only the most restrictive and necessary capabilities are assigned.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"] # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
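
If a workload genuinely needs a specific capability, a more restrictive variant is to drop all capabilities and add back only the required one; NET_BIND_SERVICE here is an illustrative assumption about the workload, not part of the rule:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    securityContext:
      capabilities:
        drop: ["ALL"]             # start from zero capabilities
        add: ["NET_BIND_SERVICE"] # re-add only what the workload needs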

See

kubernetes:S6431

Using host operating system namespaces can lead to compromise of the host systems.
These attacks would target:

  • host processes
  • host inter-process communication (IPC) mechanisms
  • network services of the local host system

These three items likely include systems that support either the internal operation of the Kubernetes cluster or the enterprise’s internal infrastructure.

Exposing these to containers creates new attack surfaces for attackers who have already successfully exploited services exposed by containers. Depending on how resilient the cluster is, attackers can extend their attack to the cluster by compromising the nodes from which the cluster started the process.

Host network sharing could provide a significant performance advantage for workloads that require critical network performance. However, the successful exploitation of this attack vector could have a catastrophic impact on confidentiality within the cluster.

Ask Yourself Whether

  • The services of this Pod are accessible to people who are not administrators of the Kubernetes cluster.
  • The performance of the cluster’s services does not rely on sharing host operating system namespaces.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not use host operating system namespaces.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  hostPID: true     # Sensitive
  hostIPC: true     # Sensitive
  hostNetwork: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
  hostPID: false
  hostIPC: false
  hostNetwork: false

See

kubernetes:S6473

Exposing administrative services can lead to unauthorized access to pods or escalation of privileges inside pods.

A port that is commonly used for administration services is open or marked as being open. Administration services like SSH might contain vulnerabilities, hard-coded credentials, or other security issues that increase the attack surface of a Kubernetes deployment. Even if the ports of the services do not get forwarded to the host system, by default they are reachable from other containers in the same network. A malicious actor that gets access to one container could use such services to escalate access and privileges.

If the administrative port is forwarded through a load balancer, then in most cases this port should be removed from the configuration to make sure it is not reachable externally. Setting the containerPort on a pod is purely informative. Therefore, removing the property is not sufficient to be secure. The port is still open and the service is still accessible.

In both cases, it is most secure to not start any administrative services in deployments. Instead, try to access the required information using Kubernetes’s own administrative tools. For example, to execute code inside a container, kubectl exec can be used. If the administration service is included to access logs, Kubernetes suggests using a sidecar container with a logging agent.
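
For example, here is a minimal sketch of the sidecar pattern for log access; the image and path names are illustrative. Instead of running an SSH daemon to read logs, a second container tails them from a shared volume:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: my-app        # hypothetical application image writing to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-agent      # sidecar: ships logs without SSH, VNC, or RDP
    image: busybox
    command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}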

Ask Yourself Whether

  • The pod starts an administrative service.

There is a risk if you answered yes to the question.

Recommended Secure Coding Practices

  • Do not start SSH, VNC, RDP or similar administrative services in pods.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example_app
spec:
  containers:
    - name: applications
      image: my_image_with_ssh
      ports:
      - containerPort: 22  # NonCompliant: Merely informative, removing this property does not
                           # close port 22.
---
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  ports:
    - port: 8022
      targetPort: 22  # Compliant
  selector:
    app: example_app

See

kubernetes:S6870

Why is this an issue?

Ephemeral storage is a type of storage that is temporary and non-persistent, meaning it does not retain data once the process is terminated. In the context of Kubernetes, ephemeral storage is used for storing temporary files that a running container can write and read.

The issue at hand pertains to the creation of a container without any defined limits for this ephemeral storage. This means that the container can potentially consume as much ephemeral storage as is available on the node where it is running.

What is the potential impact?

Resource exhaustion

Without a defined limit, a container can consume all available ephemeral storage on a node. This can lead to resource exhaustion, where no more storage is available for other containers or processes running on the same node. This could cause these other containers or processes to fail or perform poorly.

Unpredictable application behavior

If a container exhausts the available ephemeral storage, it can lead to unpredictable application behavior. For instance, if an application attempts to write to the ephemeral storage and there is no space left, it may crash or exhibit other unexpected behaviors.

How to fix it

Code examples

Noncompliant code example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web # Noncompliant
      image: nginx
      volumeMounts:
        - name: ephemeral
          mountPath: "/tmp"

Compliant solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      resources:
        limits:
          ephemeral-storage: "2Gi"
      volumeMounts:
        - name: ephemeral
          mountPath: "/tmp"

How does this work?

A limit can be set through the property resources.limits.ephemeral-storage of a container. Alternatively, a default limit for a namespace can be set with LimitRange.

Resources

Documentation

Standards

kubernetes:S6430

Allowing process privilege escalations exposes the Pod to attacks that exploit setuid binaries.

The allowPrivilegeEscalation field directly controls whether the no_new_privs flag is set in the container process.
When this flag is enabled, binaries configured with setuid or setgid bits cannot change their runtime uid or gid: potential attackers must rely on other privilege escalation techniques to successfully operate as root on the Pod.

Depending on how resilient the Kubernetes cluster and Pods are, attackers can extend their attack to the cluster by compromising the nodes from which the cluster started the Pod.

The allowPrivilegeEscalation field should not be set to true unless the Pod’s risks related to setuid or setgid bits have been mitigated.

Ask Yourself Whether

  • This Pod is accessible to people who are not administrators of the Kubernetes cluster.
  • This Pod contains binaries with setuid or setgid bits set.

There is a risk if you answered yes to all of these questions.

Recommended Secure Coding Practices

Disable privilege escalation.

Sensitive Code Example

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: true # Sensitive

Compliant Solution

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      securityContext:
        allowPrivilegeEscalation: false

See