SonarQube Vulnerability Report

Report Generated On: Friday, May 10, 2024
Project Name/URL: Application
Release:
Branch: main
Delta Analysis: No

Summary of the Detected Vulnerabilities

Severity Number of Issues
HIGH 166
MEDIUM 28
LOW 64

Detail of the Detected Vulnerabilities

Rule Severity Component Line Description Message Status
secrets:S6706 HIGH lib/insecurity.ts 23 Cryptographic private keys should not be disclosed Make sure this private key gets revoked, changed, and removed from the code. OPEN
typescript:S2068 HIGH frontend/src/app/Services/two-factor-auth-service.spec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/Services/two-factor-auth-service.spec.ts 80 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/oauth/oauth.component.spec.ts 85 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/oauth/oauth.component.spec.ts 85 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/oauth/oauth.component.spec.ts 92 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 117 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 135 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 136 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 153 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH frontend/src/app/register/register.component.spec.ts 153 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/2faSpec.ts 169 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/2faSpec.ts 195 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/addressApiSpec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/basketApiSpec.ts 25 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/basketApiSpec.ts 101 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/basketItemApiSpec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 56 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 77 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 108 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 140 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 174 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 205 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 250 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 287 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/chatBotSpec.ts 295 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 22 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 49 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 78 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 113 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 153 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 195 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 235 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 283 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/dataExportApiSpec.ts 333 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 23 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 52 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 83 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deliveryApiSpec.ts 111 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 35 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 53 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 71 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 89 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 105 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 129 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 149 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 170 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/deluxeApiSpec.ts 191 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 18 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 37 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 80 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 99 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 119 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/erasureRequestApiSpec.ts 140 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/feedbackApiSpec.ts 119 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/feedbackApiSpec.ts 152 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 30 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 46 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 79 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 94 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 109 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 124 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 142 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 245 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/loginApiSpec.ts 266 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/memoryApiSpec.ts 26 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/memoryApiSpec.ts 64 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/memoryApiSpec.ts 91 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 19 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 56 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 73 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 90 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 109 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 129 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/orderHistoryApiSpec.ts 149 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 29 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 47 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/passwordApiSpec.ts 93 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/paymentApiSpec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/productReviewApiSpec.ts 111 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/productReviewApiSpec.ts 131 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 25 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 52 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 97 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/profileImageUploadSpec.ts 123 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 38 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 55 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 72 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 93 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 114 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 137 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 155 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 173 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 190 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 207 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 228 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 249 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 269 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 292 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 309 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/quantityApiSpec.ts 326 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/securityAnswerApiSpec.ts 44 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 42 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 60 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 82 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 100 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 106 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 118 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 136 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 158 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 180 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 199 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 260 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userApiSpec.ts 271 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/userProfileSpec.ts 19 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/api/walletApiSpec.ts 18 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/administration.spec.ts 5 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/b2bOrder.spec.ts 6 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/b2bOrder.spec.ts 37 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/basket.spec.ts 4 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/basket.spec.ts 76 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/changePassword.spec.ts 6 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/changePassword.spec.ts 25 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/changePassword.spec.ts 31 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/chatbot.spec.ts 3 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/complain.spec.ts 5 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/contact.spec.ts 11 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/contact.spec.ts 47 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/dataErasure.spec.ts 3 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/dataExport.spec.ts 24 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/deluxe.spec.ts 4 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/deluxe.spec.ts 21 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 8 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 53 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 76 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/noSql.spec.ts 120 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/profile.spec.ts 3 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 10 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 28 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 29 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 60 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 61 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/register.spec.ts 84 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/restApi.spec.ts 4 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/restApi.spec.ts 82 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/search.spec.ts 56 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/search.spec.ts 83 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/totpSetup.spec.ts 6 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
typescript:S2068 HIGH test/cypress/e2e/totpSetup.spec.ts 20 Hard-coded credentials are security-sensitive Review this potentially hardcoded credential. TO_REVIEW
docker:S6504 HIGH Dockerfile 47 Allowing non-root users to modify resources copied to an image is security-sensitive Make sure no write permissions are assigned to the executable. TO_REVIEW
docker:S6504 HIGH Dockerfile 48 Allowing non-root users to modify resources copied to an image is security-sensitive Make sure no write permissions are assigned to the executable. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/about/about.component.ts 84 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/administration/administration.component.ts 50 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/administration/administration.component.ts 65 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/data-export/data-export.component.ts 45 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/last-login-ip/last-login-ip.component.ts 36 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/score-board-legacy/score-board-legacy.component.ts 216 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/score-board/score-board.component.ts 71 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/search-result/search-result.component.ts 125 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/search-result/search-result.component.ts 151 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S6268 HIGH frontend/src/app/track-result/track-result.component.ts 41 Disabling Angular built-in sanitization is security-sensitive Make sure disabling Angular built-in sanitization is safe here. TO_REVIEW
typescript:S5852 MEDIUM frontend/src/app/change-password/change-password.component.ts 36 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/codingChallenges.ts 66 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/codingChallenges.ts 67 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/startup/registerWebsocketEvents.ts 48 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM lib/utils.ts 216 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM routes/profileImageUrlUpload.ts 19 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM server.ts 227 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM test/api/metricsApiSpec.ts 17 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5852 MEDIUM test/cypress/support/commands.ts 36 Using slow regular expressions is security-sensitive Make sure the regex used here, which is vulnerable to super-linear runtime due to backtracking, cannot lead to denial of service. TO_REVIEW
typescript:S5693 MEDIUM server.ts 640 Allowing requests with excessive content length is security-sensitive Make sure the content length limit is safe here. TO_REVIEW
typescript:S5693 MEDIUM server.ts 646 Allowing requests with excessive content length is security-sensitive Make sure the content length limit is safe here. TO_REVIEW
typescript:S5693 MEDIUM server.ts 647 Allowing requests with excessive content length is security-sensitive Make sure the content length limit is safe here. TO_REVIEW
docker:S6471 MEDIUM test/smoke/Dockerfile 1 Running containers as a privileged user is security-sensitive The alpine image runs with root as the default user. Make sure it is safe here. TO_REVIEW
docker:S6470 MEDIUM Dockerfile 2 Recursively copying context directories is security-sensitive Copying recursively might inadvertently add sensitive data to the container. Make sure it is safe here. TO_REVIEW
typescript:S1523 MEDIUM routes/captcha.ts 23 Dynamically executing code is security-sensitive Make sure that this dynamic injection or execution of code is safe. TO_REVIEW
typescript:S1523 MEDIUM routes/userProfile.ts 36 Dynamically executing code is security-sensitive Make sure that this dynamic injection or execution of code is safe. TO_REVIEW
typescript:S1523 MEDIUM test/cypress/e2e/contact.spec.ts 258 Dynamically executing code is security-sensitive Make sure that this dynamic injection or execution of code is safe. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 226 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 244 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 292 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM data/datacreator.ts 670 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM frontend/src/app/code-snippet/code-snippet.component.ts 146 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM lib/insecurity.ts 55 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 15 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 16 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 17 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 19 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
typescript:S2245 MEDIUM routes/captcha.ts 20 Using pseudorandom number generators (PRNGs) is security-sensitive Make sure that using this pseudorandom number generator is safe here. TO_REVIEW
docker:S5332 LOW test/smoke/Dockerfile 7 Using clear-text protocols is security-sensitive Make sure that using clear-text protocols is safe here. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_1.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_1.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_1.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_2.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_2.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_2.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_3.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_3.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_3.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_4_correct.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_4_correct.ts 7 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectChallenge_4_correct.ts 9 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts 5 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_1.ts 8 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts 5 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_2.ts 8 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts 3 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts 4 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_3_correct.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts 5 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts 6 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW data/static/codefixes/redirectCryptoCurrencyChallenge_4.ts 8 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW frontend/src/app/order-completion/order-completion.component.spec.ts 136 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW frontend/src/app/score-board-legacy/score-board-legacy.component.spec.ts 281 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW frontend/src/app/score-board-legacy/score-board-legacy.component.spec.ts 290 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW lib/insecurity.ts 135 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW lib/insecurity.ts 136 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW lib/insecurity.ts 138 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/cypress/e2e/profile.spec.ts 74 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/cypress/e2e/profile.spec.ts 107 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/redirectSpec.ts 42 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/redirectSpec.ts 78 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/utilsSpec.ts 36 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/utilsSpec.ts 40 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 86 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 95 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 104 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 113 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 123 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 132 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S5332 LOW test/server/verifySpec.ts 141 Using clear-text protocols is security-sensitive Using http protocol is insecure. Use https instead. TO_REVIEW
typescript:S4507 LOW server.ts 634 Delivering code in production with debug features activated is security-sensitive Make sure this debug feature is deactivated before delivering the code in production. TO_REVIEW
typescript:S5122 LOW server.ts 164 Having a permissive Cross-Origin Resource Sharing policy is security-sensitive Make sure that enabling CORS is safe here. TO_REVIEW
typescript:S5122 LOW server.ts 165 Having a permissive Cross-Origin Resource Sharing policy is security-sensitive Make sure that enabling CORS is safe here. TO_REVIEW
typescript:S1313 LOW test/api/loginApiSpec.ts 253 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 1.2.3.4 is safe here. TO_REVIEW
typescript:S1313 LOW test/api/loginApiSpec.ts 257 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 1.2.3.4 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 14 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 14 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 2001:0db8:85a3:0000:0000:8a2e:0370:7334 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 18 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 0:0:0:0:0:ffff:7f00:1 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 18 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address 0:0:0:0:0:ffff:7f00:1 is safe here. TO_REVIEW
typescript:S1313 LOW test/server/utilsSpec.ts 26 Using hardcoded IP addresses is security-sensitive Make sure using a hardcoded IP address ::ffff:192.0.2.128 is safe here. TO_REVIEW
typescript:S4790 LOW lib/insecurity.ts 43 Using weak hashing algorithms is security-sensitive Make sure this weak hash algorithm is not used in a sensitive context here. TO_REVIEW
Web:S5725 LOW frontend/src/index.html 15 Using remote artifacts without integrity checks is security-sensitive Make sure not using resource integrity feature is safe here. TO_REVIEW
Web:S5725 LOW frontend/src/index.html 16 Using remote artifacts without integrity checks is security-sensitive Make sure not using resource integrity feature is safe here. TO_REVIEW
docker:S6500 LOW Dockerfile 25 Automatically installing recommended packages is security-sensitive Make sure automatically installing recommended packages is safe here. TO_REVIEW
javascript:S4790 LOW Gruntfile.js 76 Using weak hashing algorithms is security-sensitive Make sure this weak hash algorithm is not used in a sensitive context here. TO_REVIEW
docker:S6505 LOW Dockerfile 4 Allowing shell scripts execution during package installation is security-sensitive Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. TO_REVIEW
docker:S6505 LOW Dockerfile 5 Allowing shell scripts execution during package installation is security-sensitive Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. TO_REVIEW
docker:S6505 LOW Dockerfile 19 Allowing shell scripts execution during package installation is security-sensitive Omitting --ignore-scripts can lead to the execution of shell scripts. Make sure it is safe here. TO_REVIEW
Web:S5148 LOW frontend/src/app/nft-unlock/nft-unlock.component.html 63 Authorizing an opened window to access back to the originating window is security-sensitive Make sure not using rel="noopener" is safe here. TO_REVIEW
Web:S5148 LOW frontend/src/app/nft-unlock/nft-unlock.component.html 80 Authorizing an opened window to access back to the originating window is security-sensitive Make sure not using rel="noopener" is safe here. TO_REVIEW

Known Security Rules

Rule Description
azureresourcemanager:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that the encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold: data breaches and exposure of encrypted data can undermine trust in the organization, since customers, clients, and stakeholders may lose confidence in its ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Databases

Code examples

The following code samples are equivalent for Azure Database for MySQL servers, Azure Database for PostgreSQL servers, and Azure Database for MariaDB servers.

For all of these, there is no minimal TLS version enforced by default.

Noncompliant code example

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_0"
      }
    }
  ]
}

Using Bicep:

resource mysqlDbServer 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    minimalTlsVersion: 'TLS1_0' // Noncompliant
  }
}

Compliant solution

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "name": "example",
      "properties": {
        "minimalTlsVersion": "TLS1_2"
      }
    }
  ]
}

Using Bicep:

resource mysqlDbServer 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  name: 'example'
  properties: {
    minimalTlsVersion: 'TLS1_2'
  }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback is framework-dependent: an outdated framework's TLS v1.2 settings may still enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
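
The same principle extends beyond databases. As a hedged illustration, the Bicep sketch below enforces TLS v1.2 on an Azure Storage account, whose API exposes an analogous property under the slightly different name minimumTlsVersion; the account name and API version are illustrative assumptions, not findings from this report.

// Hedged sketch: enforcing a minimum TLS version on a storage account.
// The account name and API version are illustrative assumptions.
resource exampleStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'examplestorage'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    minimumTlsVersion: 'TLS1_2' // rejects TLS 1.0 and 1.1 handshakes
  }
}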

azureresourcemanager:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons, such as quick maintenance or time saving, or simply by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.
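
As a non-authoritative sketch of this private-network approach, the Bicep below exposes a Web App through a private endpoint instead of a public address; the parameters, names, and API version are assumptions for illustration only.

// Hedged sketch: reaching a Web App over a private endpoint rather than a public IP.
// 'subnetId' and 'webAppId' are hypothetical parameters for pre-existing resources.
param subnetId string
param webAppId string

resource examplePrivateEndpoint 'Microsoft.Network/privateEndpoints@2022-07-01' = {
  name: 'example-private-endpoint'
  location: resourceGroup().location
  properties: {
    subnet: {
      id: subnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'example-connection'
        properties: {
          privateLinkServiceId: webAppId
          groupIds: [
            'sites' // the sub-resource to expose; 'sites' targets the Web App itself
          ]
        }
      }
    ]
  }
}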

Sensitive Code Example

Using publicNetworkAccess to control access to resources:

resource exampleSite 'Microsoft.Web/sites@2020-12-01' = {
  name: 'example-site'
  properties: {
    publicNetworkAccess: 'Enabled'
  }
}

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": {
          "publicNetworkAccess": "Enabled"
        }
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example",
      "resources": [
        {
          "type": "config",
          "apiVersion": "2020-12-01",
          "name": "example-config",
          "properties": {
            "publicNetworkAccess": "Enabled"
          }
        }
      ]
    }
  ]
}

Using IP address ranges to control access to resources:

resource exampleFirewall 'Microsoft.Sql/servers/firewallRules@2014-04-01' = {
  name: 'example-firewall'
  properties: {
    startIpAddress: '0.0.0.0'
    endIpAddress: '255.255.255.255'
  }
}

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "0.0.0.0",
        "endIpAddress": "255.255.255.255"
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [
        {
          "type": "firewallRules",
          "apiVersion": "2014-04-01",
          "name": "example-firewall",
          "properties": {
            "startIpAddress": "0.0.0.0",
            "endIpAddress": "255.255.255.255"
          }
        }
      ]
    }
  ]
}

Compliant Solution

Using publicNetworkAccess to control access to resources:

resource exampleSite 'Microsoft.Web/sites@2020-12-01' = {
  name: 'example-site'
  properties: {
    publicNetworkAccess: 'Disabled'
  }
}

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "properties": {
        "siteConfig": {
          "publicNetworkAccess": "Disabled"
        }
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2020-12-01",
      "name": "example-site",
      "resources": [
        {
          "type": "config",
          "apiVersion": "2020-12-01",
          "name": "example-config",
          "properties": {
            "publicNetworkAccess": "Disabled"
          }
        }
      ]
    }
  ]
}

Using IP address ranges to control access to resources:

resource exampleFirewall 'Microsoft.Sql/servers/firewallRules@2014-04-01' = {
  name: 'example-firewall'
  properties: {
    startIpAddress: '192.168.0.0'
    endIpAddress: '192.168.255.255'
  }
}

Using ARM templates:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers/firewallRules",
      "apiVersion": "2014-04-01",
      "name": "example-firewall",
      "properties": {
        "startIpAddress": "192.168.0.0",
        "endIpAddress": "192.168.255.255"
      }
    }
  ]
}

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01",
      "name": "example-database",
      "resources": [
        {
          "type": "firewallRules",
          "apiVersion": "2014-04-01",
          "name": "example-firewall",
          "properties": {
            "startIpAddress": "192.168.0.0",
            "endIpAddress": "192.168.255.255"
          }
        }
      ]
    }
  ]
}

azureresourcemanager:S6378

Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks.

Authenticating via managed identities to an Azure resource relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets Azure uses are not even accessible to end-users.

In typical scenarios without managed identities, credentials must be handled manually, which can lead to them being mistakenly left in code bases. In addition, configuration faults may occur when storing these values or assigning permissions to them.

By transparently taking care of Azure Active Directory authentication, managed identities remove the need for day-to-day credential management.

Ask Yourself Whether

The resource:

  • Needs to authenticate to Azure resources that support Azure Active Directory (AAD).
  • Uses a different Access Control system that doesn’t guarantee the same security controls as AAD, or no Access Control system at all.

There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:

  • It cannot be shared across resources.
  • Its life cycle is deeply tied to the life cycle of its Azure resource.
  • It provides a unique independent identity.

Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above.
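
For illustration, a user-assigned identity could look like the following minimal Bicep sketch (the identity resource and its name are assumptions added here, mirroring the API Management example below):

resource userIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: 'example-identity'
  location: resourceGroup().location
}

resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  identity: {
    type: 'UserAssigned'
    // The map is keyed by the full resource ID of the user-assigned identity
    userAssignedIdentities: {
      '${userIdentity.id}': {}
    }
  }
}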

Sensitive Code Example

Using ARM templates:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "apiManagementService"
        }
    ]
}

Using Bicep:

resource sensitiveApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  // Sensitive: no Managed Identity is defined
}

Compliant Solution

Using ARM templates:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "apiManagementService",
            "identity": {
                "type": "SystemAssigned"
            }
        }
    ]
}

Using Bicep:

resource sensitiveApiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
  name: 'apiManagementService'
  identity: {
    type: 'SystemAssigned'
  }
}
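
A managed identity only provides authentication; it still needs authorization on the target resources. As an illustration only (the target storage account, role, and GUID seed below are assumptions, not part of this report), a role assignment can reference the identity's principal ID:

// Hypothetical target resource the service needs to read from
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
  name: 'examplestorage'
}

// Grant the service's system-assigned identity read access to blob data
resource readerAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storage.id, sensitiveApiManagementService.id, 'blob-data-reader')
  scope: storage
  properties: {
    principalId: sensitiveApiManagementService.identity.principalId
    // Built-in role definition ID for "Storage Blob Data Reader"
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1')
    principalType: 'ServicePrincipal'
  }
}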


azureresourcemanager:S6648

Azure Resource Manager templates define parameters as a way to reuse templates in different environments. Secure parameters (secure strings and secure objects) should not be assigned a default value.

Why is this an issue?

Parameters of type securestring and secureObject are designed to pass sensitive data to the resources being deployed. Unlike other data types, they cannot be accessed after the deployment is completed: they can neither be logged nor used as an output.

Secure parameters can be assigned a default value, which is used if the parameter is not supplied. This default value is not protected: it is stored in cleartext in the deployment history.

What is the potential impact?

If the default value contains a secret, it will be disclosed to all accounts that have read access to the deployment history.

How to fix it in ARM templates

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "type": "securestring",
      "defaultValue": "S3CR3T"
    }
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "secretValue": {
      "type": "securestring"
    }
  }
}
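
Instead of a default value, supply the secret at deployment time. One option, sketched here under the assumption that the secret lives in Azure Key Vault (the vault, secret, and file names are placeholders), is to pass it to the template as a Bicep module parameter:

resource vault 'Microsoft.KeyVault/vaults@2023-02-01' existing = {
  name: 'example-vault'
}

// 'secretValue' must be declared with @secure() in main.bicep
// for getSecret to be accepted as a module parameter
module example 'main.bicep' = {
  name: 'example-deployment'
  params: {
    secretValue: vault.getSecret('example-secret')
  }
}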


azureresourcemanager:S6656

When using nested deployments in Azure, template expressions can be evaluated within the scope of the parent template or the scope of the nested template. If such a template expression evaluates a secure value of the parent template, it is possible to expose this value in the deployment history.

Why is this an issue?

Parameters of type securestring and secureObject are designed to pass sensitive data to the resources being deployed. Secure parameters cannot be accessed after the deployment is completed: they can neither be logged nor used as an output.

When used in nested deployments, however, it is possible to embed secure parameters in such a way that they remain visible afterward.

What is the potential impact?

If the nested deployment embeds a secure parameter in this way, the value of this parameter may be readable in the deployment history. This can lead to important credentials being leaked to unauthorized accounts.

How to fix it in ARM Templates

By setting properties.expressionEvaluationOptions.scope to Inner on the nested deployment resource, template expressions are evaluated within the scope of the nested template. This makes it impossible to expose secure parameters defined in the parent template.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {
      "type": "securestring",
      "defaultValue": "[newGuid()]"
    }
  },
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": [
            {
              "name": "example",
              "type": "Microsoft.Compute/virtualMachines",
              "apiVersion": "2022-11-01",
              "properties": {
                "osProfile": {
                  "adminUsername": "[parameters('adminUsername')]"
                }
              }
            }
          ]
        }
      }
    }
  ]
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2022-09-01",
      "properties": {
        "expressionEvaluationOptions": {
          "scope": "Inner"
        },
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "adminUsername": {
              "type": "securestring",
              "defaultValue": "[newGuid()]"
            }
          },
          "resources": [
            {
              "name": "example",
              "type": "Microsoft.Compute/virtualMachines",
              "apiVersion": "2022-11-01",
              "properties": {
                "osProfile": {
                  "adminUsername": "[parameters('adminUsername')]"
                }
              }
            }
          ]
        }
      }
    }
  ]
}
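
In Bicep, modules are emitted as nested deployments with inner expression evaluation scope, so the compliant solution above can be sketched as follows (file names are placeholders added for illustration):

// vm.bicep — the secure parameter is declared inside the module (inner scope)
@secure()
param adminUsername string = newGuid()

resource example 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  name: 'example'
  location: resourceGroup().location
  properties: {
    osProfile: {
      adminUsername: adminUsername
    }
  }
}

// main.bicep — the module compiles to a nested deployment with scope 'inner'
module example 'vm.bicep' = {
  name: 'example'
}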


azureresourcemanager:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that use of the http protocol is being deprecated by major web browsers.


Ask Yourself Whether

  • Application data needs to be protected against tampering or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For Microsoft.Web/sites:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "httpsOnly": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites@2022-03-01' = {
  properties: {
    httpsOnly: false // Sensitive
  }
}

For Microsoft.Web/sites/config:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "name": "sites/example",
      "apiVersion": "2022-09-01",
      "properties": {
        "ftpsState": "AllAllowed"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
  properties: {
    ftpsState: 'AllAllowed' // Sensitive
  }
}

For Microsoft.Storage/storageAccounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "supportsHttpsTrafficOnly": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    supportsHttpsTrafficOnly: false // Sensitive
  }
}

For Microsoft.ApiManagement/service/apis:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "name": "service/example",
      "apiVersion": "2022-08-01",
      "properties": {
        "protocols": ["http"]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
  properties: {
    protocols: ['http'] // Sensitive
  }
}

For Microsoft.Cdn/profiles/endpoints:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cdn/profiles/endpoints",
      "name": "profiles/example",
      "apiVersion": "2021-06-01",
      "properties": {
        "isHttpAllowed": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
  properties: {
    isHttpAllowed: true // Sensitive
  }
}

For Microsoft.Cache/redisEnterprise/databases:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "name": "redisEnterprise/example",
      "apiVersion": "2022-01-01",
      "properties": {
        "clientProtocol": "Plaintext"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
  properties: {
    clientProtocol: 'Plaintext' // Sensitive
  }
}

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "name": "example",
      "apiVersion": "2017-12-01",
      "properties": {
        "sslEnforcement": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    sslEnforcement: 'Disabled' // Sensitive
  }
}

Compliant Solution

For Microsoft.Web/sites:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "httpsOnly": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites@2022-03-01' = {
  properties: {
    httpsOnly: true
  }
}

For Microsoft.Web/sites/config:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "name": "sites/example",
      "apiVersion": "2022-09-01",
      "properties": {
        "ftpsState": "FtpsOnly"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Web/sites/config@2022-09-01' = {
  properties: {
    ftpsState: 'FtpsOnly'
  }
}

For Microsoft.Storage/storageAccounts:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "example",
      "apiVersion": "2022-09-01",
      "properties": {
        "supportsHttpsTrafficOnly": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    supportsHttpsTrafficOnly: true
  }
}

For Microsoft.ApiManagement/service/apis:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ApiManagement/service/apis",
      "name": "service/example",
      "apiVersion": "2022-08-01",
      "properties": {
        "protocols": ["https"]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ApiManagement/service/apis@2022-08-01' = {
  properties: {
    protocols: ['https']
  }
}

For Microsoft.Cdn/profiles/endpoints:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cdn/profiles/endpoints",
      "name": "profiles/example",
      "apiVersion": "2021-06-01",
      "properties": {
        "isHttpAllowed": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cdn/profiles/endpoints@2021-06-01' = {
  properties: {
    isHttpAllowed: false
  }
}

For Microsoft.Cache/redisEnterprise/databases:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Cache/redisEnterprise/databases",
      "name": "redisEnterprise/example",
      "apiVersion": "2022-01-01",
      "properties": {
        "clientProtocol": "Encrypted"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Cache/redisEnterprise/databases@2022-01-01' = {
  properties: {
    clientProtocol: 'Encrypted'
  }
}

For Microsoft.DBforMySQL/servers, Microsoft.DBforMariaDB/servers, and Microsoft.DBforPostgreSQL/servers:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DBforMySQL/servers",
      "name": "example",
      "apiVersion": "2017-12-01",
      "properties": {
        "sslEnforcement": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    sslEnforcement: 'Enabled'
  }
}


azureresourcemanager:S6388

Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to read the unencrypted information.

Ask Yourself Whether

  • The service contains sensitive information that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt cloud storage that contains sensitive information.

Sensitive Code Example

For Microsoft.AzureArcData/sqlServerInstances/databases:

Disabled encryption on SQL Server instance database:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "databases/example",
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": {
        "databaseOptions": {
          "isEncrypted": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.AzureArcData/sqlServerInstances/databases@2023-03-15-preview' = {
  properties: {
    databaseOptions: {
      isEncrypted: false
    }
  }
}

For Microsoft.Compute/disks, encryption is disabled by default.
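
A minimal Bicep sketch of the shape this rule flags (the empty-disk properties are assumptions added for illustration):

resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  name: 'example'
  location: resourceGroup().location
  properties: {
    // Sensitive: no encryption or encryptionSettingsCollection block is defined
    creationData: {
      createOption: 'Empty'
    }
    diskSizeGB: 128
  }
}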

For Microsoft.Compute/snapshots:

Disabled disk encryption with settings collection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    encryptionSettingsCollection: {
      enabled: false
    }
  }
}

For Microsoft.Compute/virtualMachines:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "securityProfile": {
          "encryptionAtHost": false
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    securityProfile: {
      encryptionAtHost: false
    }
  }
}

Disabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId"
            }
          ]
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      dataDisks: [
        {
          name: 'myDisk'
        }
      ]
    }
  }
}

Disabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "encryptionSettings": {
              "enabled": false
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        encryptionSettings: {
          enabled: false
        }
      }
    }
  }
}

Disabled encryption for OS managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "id": "myDiskId"
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        managedDisk: {
          id: 'myDiskId'
        }
      }
    }
  }
}

For Microsoft.Compute/virtualMachineScaleSets:

Disabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "securityProfile": {
            "encryptionAtHost": false
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      securityProfile: {
        encryptionAtHost: false
      }
    }
  }
}

Disabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk"
              }
            ]
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDataDisk'
          }
        ]
      }
    }
  }
}

Disabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk"
            }
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        osDisk: {
          name: 'myOsDisk'
        }
      }
    }
  }
}

For Microsoft.ContainerService/managedClusters:

Disabled encryption at host, with no disk encryption set ID:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": {
        "agentPoolProfiles": [
          {
            "enableEncryptionAtHost": false
          }
        ]
      }
    }
  ]
}
resource symbolicname 'Microsoft.ContainerService/managedClusters@2023-03-02-preview' = {
  properties: {
    agentPoolProfiles: [
      {
        enableEncryptionAtHost: false
      }
    ]
  }
}

For Microsoft.DataLakeStore/accounts:

Disabled encryption for Data Lake Store:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": {
        "encryptionState": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DataLakeStore/accounts@2016-11-01' = {
  properties: {
    encryptionState: 'Disabled'
  }
}

For Microsoft.DBforMySQL/servers:

Disabled infrastructure double encryption for MySQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Disabled'
  }
}

For Microsoft.DBforPostgreSQL/servers:

Disabled infrastructure double encryption for PostgreSQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Disabled'
  }
}

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Disabled encryption for a Cassandra Cluster datacenter’s managed disk and backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "cassandraClusters/example",
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": {
        "diskCapacity": 4
      }
    }
  ]
}
resource symbolicname 'Microsoft.DocumentDB/cassandraClusters/dataCenters@2023-04-15' = {
  name: 'string'
  parent: parent
  properties: {
    diskCapacity: 4
  }
}

For Microsoft.HDInsight/clusters:

Disabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": false
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: false
        }
      ]
    }
  }
}

Disabled encryption for data disk at application level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "clusters/example",
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": false
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters/applications@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: false
        }
      ]
    }
  }
}

Disabled encryption for resource disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "diskEncryptionProperties": {
          "encryptionAtHost": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    diskEncryptionProperties: {
      encryptionAtHost: false
    }
  }
}

For Microsoft.Kusto/clusters:

Disabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": {
        "enableDiskEncryption": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Kusto/clusters@2022-12-29' = {
  properties: {
    enableDiskEncryption: false
  }
}

For Microsoft.RecoveryServices/vaults:

Disabled encryption on infrastructure:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryption": {
          "infrastructureEncryption": "Disabled"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
  properties: {
    encryption: {
      infrastructureEncryption: 'Disabled'
    }
  }
}

Disabled encryption on infrastructure for backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "vaults/example",
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": {
        "infrastructureEncryptionState": "Disabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults/backupEncryptionConfigs@2023-01-01' = {
  properties: {
    encryptionAtRestType: '{CustomerManaged | MicrosoftManaged}'
    infrastructureEncryptionState: 'Disabled'
  }
}

For Microsoft.RedHatOpenShift/openShiftClusters:

Disabled disk encryption for master profile and worker profiles:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": {
          "encryptionAtHost": "Disabled"
        },
        "workerProfiles": [
          {
            "encryptionAtHost": "Disabled"
          }
        ]
      }
    }
  ]
}
resource symbolicname 'Microsoft.RedHatOpenShift/openShiftClusters@2022-09-04' = {
  properties: {
    masterProfile: {
      encryptionAtHost: 'Disabled'
    }
    workerProfiles: [
      {
        encryptionAtHost: 'Disabled'
      }
    ]
  }
}

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Disabled encryption for SQL Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": {
        "autoBackupSettings": {
          "enableEncryption": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-08-01-preview' = {
  properties: {
    autoBackupSettings: {
      enableEncryption: false
    }
  }
}

For Microsoft.Storage/storageAccounts:

Disabled enforcement of infrastructure encryption for double encryption of data:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "encryption": {
          "requireInfrastructureEncryption": false
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    encryption: {
      requireInfrastructureEncryption: false
    }
  }
}

For Microsoft.Storage/storageAccounts/encryptionScopes:

Disabled enforcement of infrastructure encryption for double encryption of data at the encryption scope level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "storageAccounts/example",
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": {
        "requireInfrastructureEncryption": false
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts/encryptionScopes@2022-09-01' = {
  properties: {
    requireInfrastructureEncryption: false
  }
}

Compliant Solution

For Microsoft.AzureArcData/sqlServerInstances/databases:

Enabled encryption on SQL Server instance database:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "databases/example",
      "type": "Microsoft.AzureArcData/sqlServerInstances/databases",
      "apiVersion": "2023-03-15-preview",
      "properties": {
        "databaseOptions": {
          "isEncrypted": true
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.AzureArcData/sqlServerInstances/databases@2023-03-15-preview' = {
  properties: {
    databaseOptions: {
      isEncrypted: true
    }
  }
}

For Microsoft.Compute/disks:

Enabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "string"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  properties: {
    encryption: {
      diskEncryptionSetId: 'string'
      type: 'string'
    }
  }
}

Enabled encryption through setting encryptionSettingsCollection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [
            {
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  properties: {
    encryptionSettingsCollection: {
      enabled: true
      encryptionSettings: [
        {
          diskEncryptionKey: {
            secretUrl: 'string'
            sourceVault: {
              id: 'string'
            }
          }
        }
      ]
    }
  }
}

Enabled encryption through a security profile for an OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/disks",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' | 'TrustedLaunch'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/disks@2022-07-02' = {
  properties: {
    securityProfile: {
      secureVMDiskEncryptionSetId: 'string'
      securityType: '{ConfidentialVM_DiskEncryptedWithCustomerKey | ConfidentialVM_DiskEncryptedWithPlatformKey | ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey | TrustedLaunch}'
    }
  }
}

For Microsoft.Compute/snapshots:

Enabled disk encryption for snapshot:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryption": {
          "diskEncryptionSetId": "string",
          "type": "{'EncryptionAtRestWithCustomerKey' | 'EncryptionAtRestWithPlatformAndCustomerKeys' | 'EncryptionAtRestWithPlatformKey'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    encryption: {
      diskEncryptionSetId: 'string'
      type: '{EncryptionAtRestWithCustomerKey | EncryptionAtRestWithPlatformAndCustomerKeys | EncryptionAtRestWithPlatformKey}'
    }
  }
}

Enabled disk encryption with settings collection:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "encryptionSettingsCollection": {
          "enabled": true,
          "encryptionSettings": [
            {
              "diskEncryptionKey": {
                "secretUrl": "",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          ],
          "encryptionSettingsVersion": "{'1.0' | '1.1'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    encryptionSettingsCollection: {
      enabled: true
      encryptionSettings: [
        {
          diskEncryptionKey: {
            secretUrl: ''
            sourceVault: {
              id: 'string'
            }
          }
        }
      ]
      encryptionSettingsVersion: '{1.0 | 1.1}'
    }
  }
}

Enabled disk encryption through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/snapshots",
      "apiVersion": "2022-07-02",
      "properties": {
        "securityProfile": {
          "secureVMDiskEncryptionSetId": "string",
          "securityType": "{'ConfidentialVM_DiskEncryptedWithCustomerKey' | 'ConfidentialVM_DiskEncryptedWithPlatformKey' | 'ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey' |'TrustedLaunch'}"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/snapshots@2022-07-02' = {
  properties: {
    securityProfile: {
      secureVMDiskEncryptionSetId: 'string'
      securityType: '{ConfidentialVM_DiskEncryptedWithCustomerKey | ConfidentialVM_DiskEncryptedWithPlatformKey | ConfidentialVM_VMGuestStateOnlyEncryptedWithPlatformKey | TrustedLaunch}'
    }
  }
}

For Microsoft.Compute/virtualMachines:

Enabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "securityProfile": {
          "encryptionAtHost": true
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    securityProfile: {
      encryptionAtHost: true
    }
  }
}

Enabled encryption for managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId",
              "managedDisk": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          ]
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      dataDisks: [
        {
          name: 'myDisk'
          managedDisk: {
            diskEncryptionSet: {
              id: 'string'
            }
          }
        }
      ]
    }
  }
}

Enabled encryption for managed disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "dataDisks": [
            {
              "id": "myDiskId",
              "managedDisk": {
                "securityProfile": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            }
          ]
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      dataDisks: [
        {
          name: 'myDisk'
          managedDisk: {
            securityProfile: {
              diskEncryptionSet: {
                id: 'string'
              }
            }
          }
        }
      ]
    }
  }
}

Enabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "encryptionSettings": {
              "enabled": true,
              "diskEncryptionKey": {
                "secretUrl": "string",
                "sourceVault": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        encryptionSettings: {
          enabled: true
          diskEncryptionKey: {
            secretUrl: 'string'
            sourceVault: {
              id: 'string'
            }
          }
        }
      }
    }
  }
}

Enabled encryption for OS managed disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "id": "myDiskId",
              "diskEncryptionSet": {
                "id": "string"
              }
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        managedDisk: {
          id: 'myDiskId'
          diskEncryptionSet: {
            id: 'string'
          }
        }
      }
    }
  }
}

Enabled encryption for OS managed disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2022-11-01",
      "properties": {
        "storageProfile": {
          "osDisk": {
            "managedDisk": {
              "securityProfile": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource myName 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  properties: {
    storageProfile: {
      osDisk: {
        name: 'myDisk'
        managedDisk: {
          id: 'myDiskId'
          securityProfile: {
            diskEncryptionSet: {
              id: 'string'
            }
          }
        }
      }
    }
  }
}

For Microsoft.Compute/virtualMachineScaleSets:

Enabled encryption at host level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "securityProfile": {
            "encryptionAtHost": true
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      securityProfile: {
        encryptionAtHost: true
      }
    }
  }
}

Enabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk",
                "managedDisk": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDataDisk'
            managedDisk: {
              diskEncryptionSet: {
                id: 'string'
              }
            }
          }
        ]
      }
    }
  }
}

Enabled encryption for data disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "dataDisks": [
              {
                "name": "myDataDisk",
                "managedDisk": {
                  "securityProfile": {
                    "diskEncryptionSet": {
                      "id": "string"
                    }
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        dataDisks: [
          {
            name: 'myDataDisk'
            managedDisk: {
              securityProfile: {
                diskEncryptionSet: {
                  id: 'string'
                }
              }
            }
          }
        ]
      }
    }
  }
}

Enabled encryption for OS disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk",
              "managedDisk": {
                "diskEncryptionSet": {
                  "id": "string"
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        osDisk: {
          name: 'myOsDisk'
          managedDisk: {
            diskEncryptionSet: {
              id: 'string'
            }
          }
        }
      }
    }
  }
}

Enabled encryption for OS disk through security profile:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "apiVersion": "2022-11-01",
      "properties": {
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "name": "myOsDisk",
              "managedDisk": {
                "securityProfile": {
                  "diskEncryptionSet": {
                    "id": "string"
                  }
                }
              }
            }
          }
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Compute/virtualMachineScaleSets@2022-11-01' = {
  properties: {
    virtualMachineProfile: {
      storageProfile: {
        osDisk: {
          name: 'myOsDisk'
          managedDisk: {
            securityProfile: {
              diskEncryptionSet: {
                id: 'string'
              }
            }
          }
        }
      }
    }
  }
}

For Microsoft.ContainerService/managedClusters:

Enabled encryption at host and set the disk encryption set ID:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-02-preview",
      "properties": {
        "agentPoolProfiles": [
          {
            "enableEncryptionAtHost": true
          }
        ],
        "diskEncryptionSetID": "string"
      }
    }
  ]
}
resource symbolicname 'Microsoft.ContainerService/managedClusters@2023-03-02-preview' = {
  properties: {
    agentPoolProfiles: [
      {
        enableEncryptionAtHost: true
      }
    ]
    diskEncryptionSetID: 'string'
  }
}

For Microsoft.DataLakeStore/accounts:

Enabled encryption for Data Lake Store:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DataLakeStore/accounts",
      "apiVersion": "2016-11-01",
      "properties": {
        "encryptionState": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DataLakeStore/accounts@2016-11-01' = {
  properties: {
    encryptionState: 'Enabled'
  }
}

For Microsoft.DBforMySQL/servers:

Enabled infrastructure double encryption for MySQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforMySQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforMySQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Enabled'
  }
}

For Microsoft.DBforPostgreSQL/servers:

Enabled infrastructure double encryption for PostgreSQL server:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DBforPostgreSQL/servers",
      "apiVersion": "2017-12-01",
      "properties": {
        "infrastructureEncryption": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DBforPostgreSQL/servers@2017-12-01' = {
  properties: {
    infrastructureEncryption: 'Enabled'
  }
}

For Microsoft.DocumentDB/cassandraClusters/dataCenters:

Enabled encryption for a Cassandra Cluster datacenter’s managed disk and backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "cassandraClusters/example",
      "type": "Microsoft.DocumentDB/cassandraClusters/dataCenters",
      "apiVersion": "2023-04-15",
      "properties": {
        "diskCapacity": 4,
        "backupStorageCustomerKeyUri": "string",
        "managedDiskCustomerKeyUri": "string"
      }
    }
  ]
}
resource symbolicname 'Microsoft.DocumentDB/cassandraClusters/dataCenters@2023-04-15' = {
  name: 'string'
  parent: parent
  properties: {
    diskCapacity: 4
    backupStorageCustomerKeyUri: 'string'
    managedDiskCustomerKeyUri: 'string'
  }
}

For Microsoft.HDInsight/clusters:

Enabled encryption for data disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": true
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: true
        }
      ]
    }
  }
}

Enabled encryption for data disk at application level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "clusters/example",
      "type": "Microsoft.HDInsight/clusters/applications",
      "apiVersion": "2021-06-01",
      "properties": {
        "computeProfile": {
          "roles": [
            {
              "encryptDataDisks": true
            }
          ]
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters/applications@2021-06-01' = {
  properties: {
    computeProfile: {
      roles: [
        {
          encryptDataDisks: true
        }
      ]
    }
  }
}

Enabled encryption for resource disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.HDInsight/clusters",
      "apiVersion": "2021-06-01",
      "properties": {
        "diskEncryptionProperties": {
          "encryptionAtHost": true
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.HDInsight/clusters@2021-06-01' = {
  properties: {
    diskEncryptionProperties: {
      encryptionAtHost: true
    }
  }
}

For Microsoft.Kusto/clusters:

Enabled encryption for disk:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Kusto/clusters",
      "apiVersion": "2022-12-29",
      "properties": {
        "enableDiskEncryption": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Kusto/clusters@2022-12-29' = {
  properties: {
    enableDiskEncryption: true
  }
}

For Microsoft.RecoveryServices/vaults:

Enabled encryption on infrastructure:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryption": {
          "infrastructureEncryption": "Enabled"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
  properties: {
    encryption: {
      infrastructureEncryption: 'Enabled'
    }
  }
}

Enabled encryption on infrastructure for backup:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "vaults/example",
      "type": "Microsoft.RecoveryServices/vaults/backupEncryptionConfigs",
      "apiVersion": "2023-01-01",
      "properties": {
        "encryptionAtRestType": "{'CustomerManaged' | 'MicrosoftManaged'}",
        "infrastructureEncryptionState": "Enabled"
      }
    }
  ]
}
resource symbolicname 'Microsoft.RecoveryServices/vaults/backupEncryptionConfigs@2023-01-01' = {
  properties: {
    encryptionAtRestType: '{CustomerManaged | MicrosoftManaged}'
    infrastructureEncryptionState: 'Enabled'
  }
}

For Microsoft.RedHatOpenShift/openShiftClusters:

Enabled disk encryption for master profile and worker profiles:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.RedHatOpenShift/openShiftClusters",
      "apiVersion": "2022-09-04",
      "properties": {
        "masterProfile": {
          "diskEncryptionSetId": "string",
          "encryptionAtHost": "Enabled"
        },
        "workerProfiles": [
          {
            "diskEncryptionSetId": "string",
            "encryptionAtHost": "Enabled"
          }
        ]
      }
    }
  ]
}
resource symbolicname 'Microsoft.RedHatOpenShift/openShiftClusters@2022-09-04' = {
  properties: {
    masterProfile: {
      diskEncryptionSetId: 'string'
      encryptionAtHost: 'Enabled'
    }
    workerProfiles: [
      {
        diskEncryptionSetId: 'string'
        encryptionAtHost: 'Enabled'
      }
    ]
  }
}

For Microsoft.SqlVirtualMachine/sqlVirtualMachines:

Enabled encryption for SQL Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
      "apiVersion": "2022-08-01-preview",
      "properties": {
        "autoBackupSettings": {
          "enableEncryption": true,
          "password": "string"
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2022-08-01-preview' = {
  properties: {
    autoBackupSettings: {
      enableEncryption: true
      password: 'string'
    }
  }
}

For Microsoft.Storage/storageAccounts:

Enabled enforcement of infrastructure encryption for double encryption of data:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "properties": {
        "encryption": {
          "requireInfrastructureEncryption": true
        }
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  properties: {
    encryption: {
      requireInfrastructureEncryption: true
    }
  }
}

For Microsoft.Storage/storageAccounts/encryptionScopes:

Enabled enforcement of infrastructure encryption for double encryption of data at the encryption scope level:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "storageAccounts/example",
      "type": "Microsoft.Storage/storageAccounts/encryptionScopes",
      "apiVersion": "2022-09-01",
      "properties": {
        "requireInfrastructureEncryption": true
      }
    }
  ]
}
resource symbolicname 'Microsoft.Storage/storageAccounts/encryptionScopes@2022-09-01' = {
  properties: {
    requireInfrastructureEncryption: true
  }
}

See

azureresourcemanager:S6321

Why is this an issue?

Cloud platforms such as Azure support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Because administration services run with elevated privileges, a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkSecurityGroups/example",
      "type": "Microsoft.Network/networkSecurityGroups/securityRules",
      "apiVersion": "2022-11-01",
      "properties": {
        "protocol": "*",
        "destinationPortRange": "*",
        "sourceAddressPrefix": "*",
        "access": "Allow",
        "direction": "Inbound"
      }
    }
  ]
}
resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: '*'
    destinationPortRange: '*'
    sourceAddressPrefix: '*'
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkSecurityGroups/example",
      "type": "Microsoft.Network/networkSecurityGroups/securityRules",
      "apiVersion": "2022-11-01",
      "properties": {
          "protocol": "*",
          "destinationPortRange": "22",
          "sourceAddressPrefix": "10.0.0.0/24",
          "access": "Allow",
          "direction": "Inbound"
      }
    }
  ]
}
resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: '*'
    destinationPortRange: '22'
    sourceAddressPrefix: '10.0.0.0/24'
  }
}
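
To avoid hard-coding the trusted range, the source address prefix can also be supplied as a deployment parameter. The following is a minimal Bicep sketch of that variation; the parameter name trustedAdminCidr and its default value are illustrative and not part of the rule documentation:

@description('Trusted administration range, e.g. a bastion subnet (illustrative)')
param trustedAdminCidr string = '10.0.0.0/24'

resource securityRules 'Microsoft.Network/networkSecurityGroups/securityRules@2022-11-01' = {
  name: 'securityRules'
  properties: {
    direction: 'Inbound'
    access: 'Allow'
    protocol: 'Tcp' // SSH runs over TCP, so the rule does not need to match every protocol
    destinationPortRange: '22'
    sourceAddressPrefix: trustedAdminCidr // only the trusted range can reach the service
  }
}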

Resources

Documentation

Standards

azureresourcemanager:S6364

Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups allow an organization to overcome data corruption or unavailability by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to an amount of time sufficient to restore service in case of an incident.

Sensitive Code Example

For Azure App Service:

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: 'webApp'
}

resource backup 'config@2022-03-01' = {
  name: 'backup'
  parent: webApp
  properties: {
    backupSchedule: {
      frequencyInterval: 1
      frequencyUnit: 'Day'
      keepAtLeastOneBackup: true
      retentionPeriodInDays: 2  // Sensitive
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "webApp",
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-03-01",
      "name": "webApp/backup",
      "properties": {
        "backupSchedule": {
          "frequencyInterval": 1,
          "frequencyUnit": "Day",
          "keepAtLeastOneBackup": true,
          "retentionPeriodInDays": 2
        }
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', 'webApp')]"
      ]
    }
  ]
}

For Azure Cosmos DB accounts:

resource cosmosDb 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
    properties: {
        backupPolicy: {
            type: 'Periodic'
            periodicModeProperties: {
                backupIntervalInMinutes: 1440
                backupRetentionIntervalInHours: 8  // Sensitive
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2023-04-15",
      "properties": {
        "backupPolicy": {
          "type": "Periodic",
          "periodicModeProperties": {
            "backupIntervalInMinutes": 1440,
            "backupRetentionIntervalInHours": 8
          }
        }
      }
    }
  ]
}

For Azure Backup vault policies:

resource vault 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
    name: 'testVault'

    resource backupPolicy 'backupPolicies@2023-01-01' = {
        name: 'backupPolicy'
        properties: {
            backupManagementType: 'AzureSql'
            retentionPolicy: {
                retentionPolicyType: 'SimpleRetentionPolicy'
                retentionDuration: {
                    count: 2  // Sensitive
                    durationType: 'Days'
                }
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "name": "testVault",
      "resources": [
        {
          "type": "backupPolicies",
          "apiVersion": "2023-01-01",
          "name": "testVault/backupPolicy",
          "properties": {
            "backupManagementType": "AzureSql",
            "retentionPolicy": {
              "retentionPolicyType": "SimpleRetentionPolicy",
              "retentionDuration": {
                "count": 2,
                "durationType": "Days"
              }
            }
          }
        }
      ]
    }
  ]
}

Compliant Solution

For Azure App Service:

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: 'webApp'
}

resource backup 'config@2022-03-01' = {
  name: 'backup'
  parent: webApp
  properties: {
    backupSchedule: {
      frequencyInterval: 1
      frequencyUnit: 'Day'
      keepAtLeastOneBackup: true
      retentionPeriodInDays: 8
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2022-03-01",
      "name": "webApp",
    },
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2022-03-01",
      "name": "webApp/backup",
      "properties": {
        "backupSchedule": {
          "frequencyInterval": 1,
          "frequencyUnit": "Day",
          "keepAtLeastOneBackup": true,
          "retentionPeriodInDays": 30
        }
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites', 'webApp')]"
      ]
    }
  ]
}

For Azure Cosmos DB accounts:

resource cosmosDb 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
    properties: {
        backupPolicy: {
            type: 'Periodic'
            periodicModeProperties: {
                backupIntervalInMinutes: 1440
                backupRetentionIntervalInHours: 192
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.DocumentDB/databaseAccounts",
      "apiVersion": "2023-04-15",
      "properties": {
        "backupPolicy": {
          "type": "Periodic",
          "periodicModeProperties": {
            "backupIntervalInMinutes": 1440,
            "backupRetentionIntervalInHours": 720
          }
        }
      }
    }
  ]
}

For Azure Backup vault policies:

resource vault 'Microsoft.RecoveryServices/vaults@2023-01-01' = {
    name: 'testVault'

    resource backupPolicy 'backupPolicies@2023-01-01' = {
        name: 'backupPolicy'
        properties: {
            backupManagementType: 'AzureSql'
            retentionPolicy: {
                retentionPolicyType: 'SimpleRetentionPolicy'
                retentionDuration: {
                    count: 8
                    durationType: 'Days'
                }
            }
        }
    }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.RecoveryServices/vaults",
      "apiVersion": "2023-01-01",
      "name": "testVault",
      "resources": [
        {
          "type": "backupPolicies",
          "apiVersion": "2023-01-01",
          "name": "testVault/backupPolicy",
          "properties": {
            "backupManagementType": "AzureSql",
            "retentionPolicy": {
              "retentionPolicyType": "SimpleRetentionPolicy",
              "retentionDuration": {
                "count": 30,
                "durationType": "Days"
              }
            }
          }
        }
      ]
    }
  ]
}
azureresourcemanager:S6379

Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.

Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.

In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require this resource to disable its administrative accounts or permissions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Batch/batchAccounts/pools",
      "apiVersion": "2022-10-01",
      "properties": {
        "startTask": {
          "userIdentity": {
            "autoUser": {
              "elevationLevel": "Admin"
            }
          }
        }
      }
    }
  ]
}
resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  properties: {
    startTask: {
      userIdentity: {
        autoUser: {
          elevationLevel: 'Admin' // Sensitive
        }
      }
    }
  }
}

For Azure Container Registries:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerRegistry/registries",
      "apiVersion": "2023-01-01-preview",
      "properties": {
        "adminUserEnabled": true
      }
    }
  ]
}
resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
  properties: {
    adminUserEnabled: true // Sensitive
  }
}

Compliant Solution

For Azure Batch Pools:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Batch/batchAccounts/pools",
      "apiVersion": "2022-10-01",
      "properties": {
        "startTask": {
          "userIdentity": {
            "autoUser": {
              "elevationLevel": "NonAdmin"
            }
          }
        }
      }
    }
  ]
}
resource AdminBatchPool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  properties: {
    startTask: {
      userIdentity: {
        autoUser: {
          elevationLevel: 'NonAdmin'
        }
      }
    }
  }
}

For Azure Container Registries:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerRegistry/registries",
      "apiVersion": "2023-01-01-preview",
      "properties": {
        "adminUserEnabled": false
      }
    }
  ]
}
resource acrAdminUserDisabled 'Microsoft.ContainerRegistry/registries@2021-09-01' = {
  properties: {
    adminUserEnabled: false
  }
}

See

azureresourcemanager:S6380

Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.

Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.

Using authentication coupled with fine-grained authorizations provides defense in depth and brings traceability to investigators of security incidents.

Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • This Azure resource stores or processes sensitive data.
  • Compliance policies require access to this resource to be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access.

If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Service:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2022-03-01",
            "name": "example"
        }
    ]
}
resource appService 'Microsoft.Web/sites@2022-09-01' = {
    name: 'example'
    // Sensitive: no authentication defined
}

For API Management:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "example"
        }
    ]
}
resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
    name: 'example'
    // Sensitive: no portal authentication defined

    resource apis 'apis@2022-09-01-preview' = {
        name: 'exampleApi'
        properties: {
            path: '/test'
            // Sensitive: no API authentication defined
        }
    }
}

For Data Factory Linked Services:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories/linkedservices",
            "apiVersion": "2018-06-01",
            "name": "example",
            "properties": {
                "type": "Web",
                "typeProperties": {
                    "authenticationType": "Anonymous"
                }
            }
        }
    ]
}
resource linkedService 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
    name: 'example'
    properties: {
        type: 'Web'
        typeProperties: {
            authenticationType: 'Anonymous' // Sensitive
        }
    }
}

For Storage Accounts and Storage Containers:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "properties": {
                "allowBlobPublicAccess": true
            }
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    name: 'example'
    properties: {
        allowBlobPublicAccess: true // Sensitive
    }
}
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "resources": [
                {
                    "type": "blobServices/containers",
                    "apiVersion": "2022-09-01",
                    "name": "blobContainerExample",
                    "properties": {
                        "publicAccess": "Blob"
                    }
                }
            ]
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    name: 'example'

    resource blobService 'blobServices@2022-09-01' = {
        name: 'default'

        resource containers 'containers@2022-09-01' = {
            name: 'exampleContainer'
            properties: {
                publicAccess: 'Blob' // Sensitive
            }
        }
    }
}

For Redis Caches:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Cache/redis",
            "apiVersion": "2022-06-01",
            "name": "example",
            "properties": {
                "redisConfiguration": {
                    "authnotrequired": "true"
                }
            }
        }
    ]
}
resource redisCache 'Microsoft.Cache/redis@2023-04-01' = {
    name: 'example'
    location: location
    properties: {
        redisConfiguration: {
            authnotrequired: 'true' // Sensitive
        }
    }
}

Compliant Solution

For App Services and equivalent:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2022-03-01",
            "name": "example",
            "resources": [
                {
                    "type": "config",
                    "apiVersion": "2022-03-01",
                    "name": "authsettingsV2",
                    "properties": {
                        "globalValidation": {
                            "requireAuthentication": true,
                            "unauthenticatedClientAction": "RedirectToLoginPage"
                        }
                    }
                }
            ]
        }
    ]
}
resource appService 'Microsoft.Web/sites@2022-09-01' = {
    name: 'example'

    resource authSettings 'config@2022-09-01' = { // Compliant
        name: 'authsettingsV2'
        properties: {
            globalValidation: {
                requireAuthentication: true
                unauthenticatedClientAction: 'RedirectToLoginPage'
            }
            platform: {
                enabled: true
            }
        }
    }
}

For API Management:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.ApiManagement/service",
            "apiVersion": "2022-09-01-preview",
            "name": "example",
            "resources": [
                {
                    "type": "portalsettings",
                    "apiVersion": "2022-09-01-preview",
                    "name": "signin",
                    "properties": {
                        "enabled": true
                    }
                },
                {
                    "type": "apis",
                    "apiVersion": "2022-09-01-preview",
                    "name": "exampleApi",
                    "properties": {
                        "authenticationSettings": {
                            "openid": {
                                "bearerTokenSendingMethods": ["authorizationHeader"],
                                "openidProviderId": "<an OpenID provider ID>"
                            }
                        }
                    }
                }
            ]
        }
    ]
}
resource apiManagementService 'Microsoft.ApiManagement/service@2022-09-01-preview' = {
    name: 'example'

    resource portalSettings 'portalsettings@2022-09-01-preview' = {
        name: 'signin'
        properties: {
            enabled: true // Compliant: Sign-in is enabled for portal access
        }
    }

    resource apis 'apis@2022-09-01-preview' = {
        name: 'exampleApi'
        properties: {
            path: '/test'
            authenticationSettings: { // Compliant: API has authentication enabled
                openid: {
                    bearerTokenSendingMethods: ['authorizationHeader']
                    openidProviderId: '<an OpenID provider ID>'
                }
            }
        }
    }
}

For Data Factory Linked Services:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.DataFactory/factories/linkedservices",
            "apiVersion": "2018-06-01",
            "name": "example",
            "properties": {
                "type": "Web",
                "typeProperties": {
                    "authenticationType": "Basic"
                }
            }
        }
    ]
}
@secure()
@description('The password for authentication')
param password string

resource linkedService 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
    name: 'example'
    properties: {
        type: 'Web'
        typeProperties: {
            authenticationType: 'Basic' // Compliant
            username: 'test'
            password: {
                type: 'SecureString'
                value: password
            }
        }
    }
}

For Storage Accounts:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "properties": {
                "allowBlobPublicAccess": false
            }
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'example'
  properties: {
    allowBlobPublicAccess: false // Compliant
  }
}
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "example",
            "resources": [
                {
                    "type": "blobServices/containers",
                    "apiVersion": "2022-09-01",
                    "name": "blobContainerExample",
                    "properties": {
                        "publicAccess": "None"
                    }
                }
            ]
        }
    ]
}
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
    name: 'example'

    resource blobService 'blobServices@2022-09-01' = {
        name: 'default'

        resource containers 'containers@2022-09-01' = {
            name: 'exampleContainer'
            properties: {
                publicAccess: 'None' // Compliant
            }
        }
    }
}

For Redis Caches:

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        {
            "type": "Microsoft.Cache/redis",
            "apiVersion": "2022-06-01",
            "name": "example",
            "properties": {
                "redisConfiguration": {}
            }
        }
    ]
}
resource redisCache 'Microsoft.Cache/redis@2023-04-01' = {
    name: 'example'
    location: location
    properties: {
        redisConfiguration: {
            // Compliant: authentication is enabled by default
        }
    }
}

See

azureresourcemanager:S6381

Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
  • Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
  • User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Owner roles to fewer than three people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.

Sensitive Code Example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "properties": {
        "description": "Assign the contributor role",
        "principalId": "string",
        "principalType": "ServicePrincipal",
        "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: tenant()
  properties: {
    description: 'Assign the contributor role'
    principalId: 'string'
    principalType: 'ServicePrincipal'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'b24988ac-6180-42a0-ab88-20f7382dd24c') // Sensitive
  }
}

Compliant Solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "properties": {
        "description": "Assign the reader role",
        "principalId": "string",
        "principalType": "ServicePrincipal",
        "roleDefinitionId": "[resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')]"
      }
    }
  ]
}
resource symbolicname 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: tenant()
  properties: {
    description: 'Assign the reader role'
    principalId: 'string'
    principalType: 'ServicePrincipal'
    roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
  }
}

See

azureresourcemanager:S6385

Defining a custom role at the Subscription or Management group scope that allows all actions will give it the same capabilities as the built-in Owner role.

Why is this an issue?

In Azure, the Owner role of a Subscription or a Management group provides entities it is assigned to with the maximum level of privileges. The Owner role allows managing all resources and assigning any role to other entities.

Because it is a powerful entitlement, it should be granted to as few users as possible.

When a custom role has the same level of permissions as the Owner one, there is a greater chance that high privileges will be granted to too many users.

What is the potential impact?

Custom roles that provide the same level of permissions as Owner might indicate a configuration issue. Any entity assigned with it can perform any action on the Subscription or Management group, including changing roles and permissions.

If the affected role is unexpectedly assigned to users, they can compromise the affected scope. They can do so in the long term by assigning dangerous roles to other users or entities.

Depending on the scope to which the role is assignable, the exact impact of a successful exploitation may vary. It generally ranges from data compromise to the takeover of the cloud infrastructure.

Infrastructure takeover

By obtaining the right role, an attacker can gain control over part or all of the Azure infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining.

This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions.

Furthermore, corporate Azure infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers with the correct role could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Financial loss

Financial losses can occur when a malicious user is able to use a paid third-party service. Each user assigned such a role will be able to use the third-party service without limit for their own needs, including in ways that were not expected.

This additional use will lead to added costs with the Azure service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected environment. This might result in a partial denial of service for all legitimate users.

How to fix it

To reduce the risk of intrusion through a compromised owner, it is recommended to limit the number of entities with owner-level permissions.

Code examples

Noncompliant code example

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleDefinitions",
      "apiVersion": "2022-04-01",
      "properties": {
        "permissions": [
          {
            "actions": ["*"],
            "notActions": []
          }
        ],
        "assignableScopes": [
          "[subscription().id]"
        ]
      }
    }
  ]
}
targetScope = 'managementGroup'

resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = { // Sensitive
  properties: {
    permissions: [
      {
        actions: ['*']
        notActions: []
      }
    ]

    assignableScopes: [
      managementGroup().id
    ]
  }
}

Compliant solution

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleDefinitions",
      "apiVersion": "2022-04-01",
      "properties": {
        "permissions": [
          {
            "actions": ["Microsoft.Compute/*"],
            "notActions": []
          }
        ],
        "assignableScopes": [
          "[subscription().id]"
        ]
      }
    }
  ]
}
targetScope = 'managementGroup'

resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  properties: {
    permissions: [
      {
        actions: ['Microsoft.Compute/*']
        notActions: []
      }
    ]

    assignableScopes: [
      managementGroup().id
    ]
  }
}

Going the extra mile

Here is a list of recommendations that can be followed regarding good usage of roles; a sketch illustrating some of them follows the list:

  • Apply the least privilege principle by creating a custom role with as few permissions as possible.
  • As custom roles can be updated, gradually add atomic permissions when required.
  • Limit the assignable scopes of the custom role to a set of Resources or Resource Groups.
  • When necessary, use the built-in Owner role instead of a custom role granting subscription owner capabilities.
  • Limit the assignments of Owner roles to fewer than three people or service principals.
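
As an illustration of these recommendations, here is a minimal Bicep sketch, not taken from the rule documentation, of a custom role that grants only read access to compute resources and can only be assigned within the deploying resource group:

targetScope = 'resourceGroup'

// Role definition names must be GUIDs, so one is derived deterministically from the scope
resource readOnlyComputeRole 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
  name: guid(resourceGroup().id, 'readOnlyComputeRole')
  properties: {
    roleName: 'Compute reader (illustrative)'
    permissions: [
      {
        actions: ['Microsoft.Compute/*/read'] // atomic, read-only permission set
        notActions: []
      }
    ]
    assignableScopes: [
      resourceGroup().id // assignable only within this resource group
    ]
  }
}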

Resources

Documentation

Standards

azureresourcemanager:S6387

Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope".

The widest scopes a role can be assigned to are:

  • Subscription: a role assigned with this scope grants access to all resources of this Subscription.
  • Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.

In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of the resources in the scope to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split the scope into multiple role assignments with a narrower scope.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the scope of the role assignment to a Resource or Resource Group (see the sketch after this list).
  • Apply the least privilege principle by assigning roles granting as few permissions as possible.
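
For instance, a role assignment can be narrowed to a single resource by setting the assignment’s scope to that resource. The following is a minimal Bicep sketch; the storage account name, the principalId parameter, and the choice of the built-in Reader role are illustrative assumptions:

param principalId string // object ID of the user, group, or service principal (illustrative)

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
  name: 'examplestorage' // illustrative existing resource
}

// Assign the built-in Reader role on this single storage account only
resource narrowAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storage.id, principalId, 'reader')
  scope: storage
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7') // Reader
    principalId: principalId
  }
}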

Sensitive Code Example

targetScope = 'subscription' // Sensitive

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(subscription().id, 'exampleRoleAssignment')
}
{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "name": "[guid(subscription().id, 'exampleRoleAssignment')]"
    }
  ]
}

Compliant Solution

targetScope = 'resourceGroup'

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(resourceGroup().id, 'exampleRoleAssignment')
}
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Authorization/roleAssignments",
      "apiVersion": "2022-04-01",
      "name": "[guid(resourceGroup().id, 'exampleRoleAssignment')]"
    }
  ]
}

See

azureresourcemanager:S6413

Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require traceability for a longer duration.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the log retention period to an amount of time sufficient to investigate and restore service in case of an incident.

Sensitive Code Example

For Azure Firewall Policy:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Network/firewallPolicies",
      "apiVersion": "2022-07-01",
      "properties": {
        "insights": {
          "isEnabled": true,
          "retentionDays": 7
        }
      }
    }
  ]
}
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = {
  properties: {
    insights: {
      isEnabled: true
      retentionDays: 7  // Sensitive
    }
  }
}

For Microsoft Network Network Watchers Flow Logs:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkWatchers/example",
      "type": "Microsoft.Network/networkWatchers/flowLogs",
      "apiVersion": "2022-07-01",
      "properties": {
        "retentionPolicy": {
          "days": 7,
          "enabled": true
        }
      }
    }
  ]
}
resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = {
  properties: {
    retentionPolicy: {
      days: 7    // Sensitive
      enabled: true
    }
  }
}

For Microsoft SQL Servers Auditing Settings:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example/default",
      "type": "Microsoft.Sql/servers/auditingSettings",
      "apiVersion": "2021-11-01",
      "properties": {
        "retentionDays": 7,
        "state": "Enabled"
      }
    }
  ]
}
resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = {
  properties: {
    retentionDays: 7    // Sensitive
  }
}

This rule also applies to other resource types whose log retention periods can be configured too short.

Compliant Solution

For Azure Firewall Policy:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.Network/firewallPolicies",
      "apiVersion": "2022-07-01",
      "properties": {
        "insights": {
          "isEnabled": true,
          "retentionDays": 30
        }
      }
    }
  ]
}
resource firewallPolicy 'Microsoft.Network/firewallPolicies@2022-07-01' = {
  properties: {
    insights: {
      isEnabled: true
      retentionDays: 30
    }
  }
}

For Microsoft Network Network Watchers Flow Logs:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "networkWatchers/example",
      "type": "Microsoft.Network/networkWatchers/flowLogs",
      "apiVersion": "2022-07-01",
      "properties": {
        "retentionPolicy": {
          "days": 30,
          "enabled": true
        }
      }
    }
  ]
}
resource networkWatchersFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2022-07-01' = {
  properties: {
    retentionPolicy: {
      days: 30
      enabled: true
    }
  }
}

For Microsoft SQL Servers Auditing Settings:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example/default",
      "type": "Microsoft.Sql/servers/auditingSettings",
      "apiVersion": "2021-11-01",
      "properties": {
        "retentionDays": 30,
        "state": "Enabled"
      }
    }
  ]
}
resource sqlServerAudit 'Microsoft.Sql/servers/auditingSettings@2021-11-01' = {
  properties: {
    retentionDays: 30
  }
}

The code above also applies to the other resource types covered by this rule.

azureresourcemanager:S6382

Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data.

Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication.

Choosing certificate-based authentication helps establish client/host trust by allowing the host to verify the client and vice versa. A certificate cannot be forged or forwarded by a man-in-the-middle eavesdropper, and its private key is never sent over the network, so it is harder to steal than a password.

In case of a security incident, certificates help bring investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked which causes all the certificates it issued to become untrusted.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be authenticated with certificates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable certificate-based authentication.

Sensitive Code Example

Where the use of client certificates is controlled by a boolean value, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SignalRService/webPubSub",
      "apiVersion": "2020-07-01-preview",
      "name": "example",
      "properties": {
        "tls": {
          "clientCertEnabled": false
        }
      }
    }
  ]
}
resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = {
  name: 'example'
  properties: {
    tls: {
      clientCertEnabled: false // Sensitive
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": false
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: false // Sensitive
  }
}

Where the use of client certificates can be made optional, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Optional"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Optional' // Sensitive
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.App/containerApps",
      "apiVersion": "2022-10-01",
      "name": "example",
      "properties": {
        "configuration": {
          "ingress": {
            "clientCertificateMode": "accept"
          }
        }
      }
    }
  ]
}
resource example 'Microsoft.App/containerApps@2022-10-01' = {
  name: 'example'
  properties: {
    configuration: {
      ingress: {
        clientCertificateMode: 'accept' // Sensitive
      }
    }
  }
}

Where client certificates can be used to authenticate outbound requests, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/linkedservices",
      "apiVersion": "2018-06-01",
      "name": "factories/example",
      "properties": {
        "type": "Web",
        "typeProperties": {
          "authenticationType": "Basic"
        }
      }
    }
  ]
}
resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'example'
  properties: {
    type: 'Web'
    typeProperties: {
      authenticationType: 'Basic' // Sensitive
    }
  }
}

Where a list of permitted client certificates must be provided, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters",
      "apiVersion": "2021-10-15",
      "name": "example",
      "properties": {
        "clientCertificates": []
      }
    }
  ]
}
resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = {
  name: 'example'
  properties: {
    clientCertificates: [] // Sensitive
  }
}

Where a resource can use both certificate-based and password-based authentication, such as:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerRegistry/registries/tokens",
      "apiVersion": "2022-12-01",
      "name": "registries/example",
      "properties": {
        "credentials": {
          "passwords": [
            {
              "name": "password1"
            }
          ]
        }
      }
    }
  ]
}
resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = {
  name: 'example'
  properties: {
    credentials: {
      passwords: [ // Sensitive
        {
          name: 'password1'
        }
      ]
    }
  }
}

Compliant Solution

Where the use of client certificates is controlled by a boolean value:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.SignalRService/webPubSub",
      "apiVersion": "2020-07-01-preview",
      "name": "example",
      "properties": {
        "tls": {
          "clientCertEnabled": true
        }
      }
    }
  ]
}
resource example 'Microsoft.SignalRService/webPubSub@2020-07-01-preview' = {
  name: 'example'
  properties: {
    tls: {
      clientCertEnabled: true
    }
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Required"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Required'
  }
}

Where the use of client certificates can be made optional:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2015-08-01",
      "name": "example",
      "properties": {
        "clientCertEnabled": true,
        "clientCertMode": "Required"
      }
    }
  ]
}
resource example 'Microsoft.Web/sites@2015-08-01' = {
  name: 'example'
  properties: {
    clientCertEnabled: true
    clientCertMode: 'Required'
  }
}
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.App/containerApps",
      "apiVersion": "2022-10-01",
      "name": "example",
      "properties": {
        "configuration": {
          "ingress": {
            "clientCertificateMode": "require"
          }
        }
      }
    }
  ]
}
resource example 'Microsoft.App/containerApps@2022-10-01' = {
  name: 'example'
  properties: {
    configuration: {
      ingress: {
        clientCertificateMode: 'require'
      }
    }
  }
}

Where client certificates can be used to authenticate outbound requests:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DataFactory/factories/linkedservices",
      "apiVersion": "2018-06-01",
      "name": "example",
      "properties": {
        "type": "Web",
        "typeProperties": {
          "authenticationType": "ClientCertificate"
        }
      }
    }
  ]
}
resource example 'Microsoft.DataFactory/factories/linkedservices@2018-06-01' = {
  name: 'example'
  properties: {
    type: 'Web'
    typeProperties: {
      authenticationType: 'ClientCertificate'
    }
  }
}

Where a list of permitted client certificates must be provided:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.DocumentDB/cassandraClusters",
      "apiVersion": "2021-10-15",
      "name": "example",
      "properties": {
        "clientCertificates": [
          {
            "pem": "[base64-encoded certificate]"
          }
        ]
      }
    }
  ]
}
resource example 'Microsoft.DocumentDB/cassandraClusters@2021-10-15' = {
  name: 'example'
  properties: {
    clientCertificates: [
      {
        pem: '[base64-encoded certificate]'
      }
    ]
  }
}

Where a resource can use both certificate-based and password-based authentication:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ContainerRegistry/registries/tokens",
      "apiVersion": "2022-12-01",
      "name": "example",
      "properties": {
        "credentials": {
          "certificates": [
            {
              "name": "certificate1",
              "encodedPemCertificate": "[base64-encoded certificate]"
            }
          ]
        }
      }
    }
  ]
}
resource example 'Microsoft.ContainerRegistry/registries/tokens@2022-12-01' = {
  name: 'example'
  properties: {
    credentials: {
      certificates: [
        {
          name: 'certificate1'
          encodedPemCertificate: '[base64-encoded certificate]'
        }
      ]
    }
  }
}

See

azureresourcemanager:S6383

Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.

To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.

Furthermore, RBAC allows operations teams to work faster during a security incident. It helps to mitigate account theft or intrusions by quickly shutting down accesses.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Enable Azure RBAC when the Azure resource supports it.
  • For Kubernetes clusters, enable Azure RBAC if Azure AD integration is supported. Otherwise, use the built-in Kubernetes RBAC.

Sensitive Code Example

For AKS Azure Kubernetes Service:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-01",
      "properties": {
        "aadProfile": {
          "enableAzureRBAC": false
        },
        "enableRBAC": false
      }
    }
  ]
}
resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = {
  properties: {
    aadProfile: {
      enableAzureRBAC: false    // Sensitive
    }
    enableRBAC: false           // Sensitive
  }
}

For Key Vault:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "2022-07-01",
      "properties": {
        "enableRbacAuthorization": false
      }
    }
  ]
}
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  properties: {
    enableRbacAuthorization: false    // Sensitive
  }
}

Compliant Solution

For AKS Azure Kubernetes Service:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.ContainerService/managedClusters",
      "apiVersion": "2023-03-01",
      "properties": {
        "aadProfile": {
          "enableAzureRBAC": true
        },
        "enableRBAC": true
      }
    }
  ]
}
resource aks 'Microsoft.ContainerService/managedClusters@2023-03-01' = {
  properties: {
    aadProfile: {
      enableAzureRBAC: true     // Compliant
    }
    enableRBAC: true            // Compliant
  }
}

For Key Vault:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "name": "example",
      "type": "Microsoft.KeyVault/vaults",
      "apiVersion": "2022-07-01",
      "properties": {
        "enableRbacAuthorization": true
      }
    }
  ]
}
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' = {
  properties: {
    enableRbacAuthorization: true    // Compliant
  }
}

See

terraform:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. This means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications decreases the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

For AWS Kinesis Data Streams server-side encryption:

resource "aws_kinesis_stream" "sensitive_stream" {
    encryption_type = "NONE" # Sensitive
}

For Amazon ElastiCache:

resource "aws_elasticache_replication_group" "example" {
    replication_group_id = "example"
    replication_group_description = "example"
    transit_encryption_enabled = false  # Sensitive
}

For Amazon ECS:

resource "aws_ecs_task_definition" "ecs_task" {
  family = "service"
  container_definitions = file("task-definition.json")

  volume {
    name = "storage"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.fs.id
      transit_encryption = "DISABLED"  # Sensitive
    }
  }
}

For Amazon OpenSearch domains:

resource "aws_elasticsearch_domain" "example" {
  domain_name = "example"
  domain_endpoint_options {
    enforce_https = false # Sensitive
  }
  node_to_node_encryption {
    enabled = false # Sensitive
  }
}

For Amazon MSK communications between clients and brokers:

resource "aws_msk_cluster" "sensitive_data_cluster" {
    encryption_info {
        encryption_in_transit {
            client_broker = "TLS_PLAINTEXT" # Sensitive
            in_cluster = false # Sensitive
        }
    }
}

For AWS Load Balancer Listeners:

resource "aws_lb_listener" "front_load_balancer" {
  protocol = "HTTP" # Sensitive

  default_action {
    type = "redirect"

    redirect {
      protocol = "HTTP"
    }
  }
}

For GCP Region Backend Services using the HTTP protocol:

resource "google_compute_region_backend_service" "example" {
  name                            = "example-service"
  region                          = "us-central1"
  health_checks                   = [google_compute_region_health_check.region.id]
  connection_draining_timeout_sec = 10
  session_affinity                = "CLIENT_IP"
  load_balancing_scheme           = "EXTERNAL"
  protocol                        = "HTTP" # Sensitive
}

Compliant Solution

For AWS Kinesis Data Streams server-side encryption:

resource "aws_kinesis_stream" "compliant_stream" {
    encryption_type = "KMS"
}

For Amazon ElastiCache:

resource "aws_elasticache_replication_group" "example" {
    replication_group_id = "example"
    replication_group_description = "example"
    transit_encryption_enabled = true
}

For Amazon ECS:

resource "aws_ecs_task_definition" "ecs_task" {
  family = "service"
  container_definitions = file("task-definition.json")

  volume {
    name = "storage"
    efs_volume_configuration {
      file_system_id = aws_efs_file_system.fs.id
      transit_encryption = "ENABLED"
    }
  }
}

For Amazon OpenSearch domains:

resource "aws_elasticsearch_domain" "example" {
  domain_name = "example"
  domain_endpoint_options {
    enforce_https = true
  }
  node_to_node_encryption {
    enabled = true
  }
}

For Amazon MSK communications between clients and brokers, data in transit is encrypted by default, allowing you to omit writing the encryption_in_transit configuration. However, if you need to configure it explicitly, this configuration is compliant:

resource "aws_msk_cluster" "sensitive_data_cluster" {
    encryption_info {
        encryption_in_transit {
            client_broker = "TLS"
            in_cluster = true
        }
    }
}

For AWS Load Balancer Listeners:

resource "aws_lb_listener" "front_load_balancer" {
  protocol = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      protocol = "HTTPS"
    }
  }
}

For GCP Region Backend Services using the HTTPS protocol:

resource "google_compute_region_backend_service" "example" {
  name                            = "example-service"
  region                          = "us-central1"
  health_checks                   = [google_compute_region_health_check.region.id]
  connection_draining_timeout_sec = 10
  session_affinity                = "CLIENT_IP"
  load_balancing_scheme           = "EXTERNAL"
  protocol                        = "HTTPS"
}

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.
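
For instance, a clear-text URL that only targets the local machine is not reported. A minimal sketch, using a hypothetical debug-only variable:

variable "local_debug_endpoint" {
  type    = string
  default = "http://127.0.0.1:8080/health" # Compliant by exception: loopback address
}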

See

terraform:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant identities only the permissions they need. A good practice is to start with the very minimum set of permissions and to refine the policy over time. To fix overly permissive policies already deployed in production, one strategy is to review the monitored activity and reduce the set of permissions to those actually used.

Sensitive Code Example

A customer-managed policy for AWS that grants all permissions by using the wildcard (*) in the Action property:

resource "aws_iam_policy" "example" {
  name = "noncompliantpolicy"

  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [
      {
        Action   = [
          "*" # Sensitive
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.mybucket.arn
        ]
      }
    ]
  })
}

A customer-managed policy for GCP that grants all permissions by assigning the owner role in the role property:

resource "google_project_iam_binding" "example" {
  project = "example"
  role    = "roles/owner" # Sensitive

  members = [
    "user:jane@example.com",
  ]
}

Compliant Solution

A customer-managed policy for AWS that grants only the required permissions:

resource "aws_iam_policy" "example" {
  name = "compliantpolicy"

  policy = jsonencode({
    Version   = "2012-10-17"
    Statement = [
      {
        Action   = [
          "s3:GetObject"
        ]
        Effect   = "Allow"
        Resource = [
          aws_s3_bucket.mybucket.arn
        ]
      }
    ]
  })
}

A customer-managed policy for GCP that grants restricted permissions by assigning a limited role in the role property:

resource "google_project_iam_binding" "example" {
  project = "example"
  role    = "roles/actions.Viewer"

  members = [
    "user:jane@example.com",
  ]
}

See

terraform:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws_db_instance and aws_rds_cluster:

resource "aws_db_instance" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = false # Sensitive, disabled by default
}

Compliant Solution

For aws_db_instance and aws_rds_cluster:

resource "aws_db_instance" "example" {
  storage_encrypted = true
}

resource "aws_rds_cluster" "example" {
  storage_encrypted = true
}

See

terraform:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access to and disclosure of sensitive information may occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice is to organize or tag resources depending on the sensitivity level of the data they store or process, which makes managing secure access control less error-prone.

Sensitive Code Example

Update permission is granted for all policies using the wildcard (*) in the Resource property:

resource "aws_iam_policy" "noncompliantpolicy" {
  name        = "noncompliantpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect   = "Allow"
        Resource = [
          "*" # Sensitive
        ]
      }
    ]
  })
}

Compliant Solution

Restrict update permission to the appropriate subset of policies:

resource "aws_iam_policy" "compliantpolicy" {
  name        = "compliantpolicy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "iam:CreatePolicyVersion"
        ]
        Effect   = "Allow"
        Resource = [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:policy/team1/*"
        ]
      }
    ]
  })
}

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

terraform:S6388

Using unencrypted cloud storage can lead to data exposure. If adversaries gain physical access to the storage medium, they are able to read unencrypted information.

Ask Yourself Whether

  • The service contains sensitive information that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt cloud storage that contains sensitive information.

Sensitive Code Example

For azurerm_data_lake_store:

resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Disabled"  # Sensitive
}

Compliant Solution

For azurerm_data_lake_store:

resource "azurerm_data_lake_store" "store" {
  name             = "store"
  encryption_state = "Enabled"
  encryption_type  = "ServiceManaged"
}

See

terraform:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant broad privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PublicRead, PublicReadWrite grant respectively "read" and "read and write" privileges to everyone in the world (AllUsers group).
  • AuthenticatedRead grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static website resources (images, CSS, etc.).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to grant users only the permissions required for their tasks. In the context of canned ACLs, set the ACL to private (the default) and, if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users (i.e., anyone in the world, authenticated or not) have read and write permissions with the public-read-write canned ACL:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
  acl    = "public-read-write"
}
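
The authenticated-read canned ACL is similarly sensitive, as it still grants read access to every AWS-authenticated user in the world; a sketch with an illustrative bucket name:

resource "aws_s3_bucket" "myauthenticatedreadbucket" { # Sensitive
  bucket = "myauthenticatedreadbucketname"
  acl    = "authenticated-read"
}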

Compliant Solution

With the private canned ACL (the default), only the bucket owner has read/write permissions on the bucket and its ACL.

resource "aws_s3_bucket" "mycompliantbucket" { # Compliant
  bucket = "mycompliantbucketname"
  acl    = "private"
}

See

terraform:S6308

Amazon Elasticsearch Service (ES) is a managed service to host Elasticsearch instances.

To harden domain (cluster) data in case of unauthorized access, ES provides data-at-rest encryption if the Elasticsearch version is 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, if adversaries gain physical access to the storage medium, they cannot access the data.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt Elasticsearch domains that contain sensitive information.

Encryption and decryption are handled transparently by ES, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_elasticsearch_domain:

resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = false  # Sensitive, disabled by default
  }
}

Compliant Solution

For aws_elasticsearch_domain:

resource "aws_elasticsearch_domain" "elasticsearch" {
  encrypt_at_rest {
    enabled = true
  }
}

See

terraform:S6380

Allowing anonymous access can reduce an organization’s ability to protect itself against attacks on its Azure resources.

Security incidents may include disrupting critical functions, data theft, and additional Azure subscription costs due to resource overload.

Using authentication coupled with fine-grained authorizations provides defense-in-depth and brings traceability to investigators of security incidents.

Depending on the affected Azure resource, multiple authentication choices are possible: Active Directory Authentication, OpenID implementations (Google, Microsoft, etc.) or native Azure mechanisms.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • This Azure resource stores or processes sensitive data.
  • Compliance policies require access to this resource to be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Enable authentication in this Azure resource, and disable anonymous access.

If only Basic Authentication is available, enable it.

Sensitive Code Example

For App Services and equivalent:

resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = false # Sensitive
  }

  auth_settings {
    enabled = true
    unauthenticated_client_action = "AllowAnonymous" # Sensitive
  }
}

For API Management:

resource "azurerm_api_management_api" "example" { # Sensitive, the openid_authentication block is missing
  name = "example-api"
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = false # Sensitive
  }
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Anonymous" # Sensitive
}

For Storage Accounts:

resource "azurerm_storage_account" "example" {
  allow_blob_public_access = true # Sensitive
}

resource "azurerm_storage_container" "example" {
  container_access_type = "blob" # Sensitive
}

For Redis Caches:

resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = false # Sensitive
  }
}

Compliant Solution

For App Services and equivalent:

resource "azurerm_function_app" "example" {
  name = "example"

  auth_settings {
    enabled = true
    unauthenticated_client_action = "RedirectToLoginPage"
  }
}

For API Management:

resource "azurerm_api_management_api" "example" {
  name = "example-api"

  openid_authentication {
    openid_provider_name = azurerm_api_management_openid_connect_provider.example.name
  }
}

resource "azurerm_api_management" "example" {
  sign_in {
    enabled = true
  }
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_sftp" "example" {
  authentication_type = "Basic"
  username            = local.creds.username
  password            = local.creds.password
}

resource "azurerm_data_factory_linked_service_odata" "example" {
  basic_authentication {
    username = local.creds.username
    password = local.creds.password
  }
}

For Storage Accounts:

resource "azurerm_storage_account" "example" {
  allow_blob_public_access = false
}

resource "azurerm_storage_container" "example" {
  container_access_type = "private"
}

For Redis Caches:

resource "azurerm_redis_cache" "example" {
  name = "example-cache"

  redis_configuration {
    enable_authentication = true
  }
}

See

terraform:S6381

Azure Resource Manager offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Contributor (b24988ac-6180-42a0-ab88-20f7382dd24c)
  • Owner (8e3af657-a8ff-443c-a75c-2fe8c4bcb635)
  • User Access Administrator (18d7d88d-d35e-4fb5-a5c3-7773c20a72d9)

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Owner roles to fewer than three people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible (see the sketch below).
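
As an illustration, here is a minimal custom role sketch; the role name, the single permission, and the resource group reference are assumptions:

resource "azurerm_role_definition" "example" {
  name  = "vm-restarter" # hypothetical custom role
  scope = azurerm_resource_group.example.id

  permissions {
    # a single, narrowly scoped action instead of a broad wildcard
    actions = ["Microsoft.Compute/virtualMachines/restart/action"]
  }

  assignable_scopes = [azurerm_resource_group.example.id]
}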

Sensitive Code Example

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Owner" # Sensitive
  principal_id         = data.azuread_user.example.id
}

Compliant Solution

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Azure Maps Data Reader"
  principal_id         = data.azuread_user.example.id
}

See

terraform:S6382

Disabling certificate-based authentication can reduce an organization’s ability to react against attacks on its critical functions and data.

Azure offers various authentication options to access resources: Anonymous connections, Basic authentication, password-based authentication, and certificate-based authentication.

Choosing certificate-based authentication helps bring client/host trust by allowing the host to verify the client and vice versa. It cannot be forged or forwarded by a man-in-the-middle eavesdropper, and the certificate’s private key is never sent over the network, so it is harder to steal than a password.

In case of a security incident, certificates help bring investigators traceability and allow security operations teams to react faster. For example, all compromised certificates could be revoked individually, or an issuing certificate could be revoked which causes all the certificates it issued to become untrusted.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be authenticated with certificates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable certificate-based authentication.

Sensitive Code Example

For App Service:

resource "azurerm_app_service" "example" {
  client_cert_enabled = false # Sensitive
}

For Logic App Standards and Function Apps:

resource "azurerm_function_app" "example" {
  client_cert_mode = "Optional" # Sensitive
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "Basic" # Sensitive
}

For API Management:

resource "azurerm_api_management" "example" {
  sku_name = "Consumption_1"
  client_certificate_mode = "Optional" # Sensitive
}

For Linux and Windows Web Apps:

resource "azurerm_linux_web_app" "example" {
  client_cert_enabled = false # Sensitive
}
resource "azurerm_linux_web_app" "exemple2" {
  client_cert_enabled = true
  client_cert_mode = "Optional" # Sensitive
}

Compliant Solution

For App Service:

resource "azurerm_app_service" "example" {
  client_cert_enabled = true
}

For Logic App Standards and Function Apps:

resource "azurerm_function_app" "example" {
  client_cert_mode = "Required"
}

For Data Factory Linked Services:

resource "azurerm_data_factory_linked_service_web" "example" {
  authentication_type = "ClientCertificate"
}

For API Management:

resource "azurerm_api_management" "example" {
  sku_name = "Consumption_1"
  client_certificate_mode = "Required"
}

For Linux and Windows Web Apps:

resource "azurerm_linux_web_app" "exemple" {
  client_cert_enabled = true
  client_cert_mode = "Required"
}

See

terraform:S6383

Disabling Role-Based Access Control (RBAC) on Azure resources can reduce an organization’s ability to protect itself against access controls being compromised.

To be considered safe, access controls must follow the principle of least privilege and correctly segregate duties amongst users. RBAC helps enforce these practices by adapting the organization’s access control needs into explicit role-based policies: it helps keep access controls maintainable and sustainable.

Furthermore, RBAC allows operations teams to work faster during a security incident. It helps mitigate account theft or intrusions by quickly shutting down access.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Enable Azure RBAC when the Azure resource supports it.
  • For Kubernetes clusters, enable Azure RBAC if Azure AD integration is supported. Otherwise, use the built-in Kubernetes RBAC.

Sensitive Code Example

For Azure Kubernetes Services:

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = false # Sensitive
  }
}

resource "azurerm_kubernetes_cluster" "example2" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      azure_rbac_enabled = false # Sensitive
    }
  }
}

For Key Vaults:

resource "azurerm_key_vault" "example" {
  enable_rbac_authorization = false # Sensitive
}

Compliant Solution

For Azure Kubernetes Services:

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = true
  }
}

resource "azurerm_kubernetes_cluster" "example" {
  role_based_access_control {
    enabled = true

    azure_active_directory {
      managed = true
      azure_rbac_enabled = true
    }
  }
}

For Key Vaults:

resource "azurerm_key_vault" "example" {
  enable_rbac_authorization   = true
}

See

terraform:S6385

Defining a custom role at the Subscription or Management group scope that allows all actions will give it the same capabilities as the built-in Owner role.

Why is this an issue?

In Azure, the Owner role of a Subscription or a Management group provides entities it is assigned to with the maximum level of privileges. The Owner role allows managing all resources and assigning any role to other entities.

Because it is a powerful entitlement, it should be granted to as few users as possible.

When a custom role has the same level of permissions as the Owner role, the chances are greater that high privileges are granted to too many users.

What is the potential impact?

Custom roles that provide the same level of permissions as Owner might indicate a configuration issue. Any entity assigned with it can perform any action on the Subscription or Management group, including changing roles and permissions.

If the affected role is unexpectedly assigned to users, they can compromise the affected scope. They can do so in the long term by assigning dangerous roles to other users or entities.

Depending on the scope to which the role is assignable, the exact impact of a successful exploitation may vary. It generally ranges from data compromise to the takeover of the cloud infrastructure.

Infrastructure takeover

By obtaining the right role, an attacker can gain control over part or all of the Azure infrastructure. They can modify DNS settings, redirect traffic, or launch malicious instances that can be used for various nefarious activities, including launching DDoS attacks, hosting phishing websites, or distributing malware. Malicious instances may also be used for resource-intensive tasks such as cryptocurrency mining.

This can result in legal liability, but also increased costs, degraded performance, and potential service disruptions.

Furthermore, corporate Azure infrastructures are often connected to other services and to the internal networks of the organization. Because of this, cloud infrastructure is often used by attackers as a gateway to other assets. Attackers can leverage this gateway to gain access to more services, to compromise more business-critical data, and to cause more damage to the overall infrastructure.

Compromise of sensitive data

If the affected service is used to store or process personally identifiable information or other sensitive data, attackers with the correct role could be able to access it. Depending on the type of data that is compromised, it could lead to privacy violations, identity theft, financial loss, or other negative outcomes.

In most cases, a company suffering a sensitive data compromise will face a reputational loss when the security issue is publicly disclosed.

Financial loss

Financial losses can occur when a malicious user is able to use a paid third-party service. Each user assigned the overly permissive role can use the third-party service without limit for their own needs, including in ways that were not expected.

This additional use will lead to added costs with the Azure service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected environment. This might result in a partial denial of service for all legitimate users.

How to fix it

To reduce the risk of intrusion through a compromised owner account, it is recommended to limit the number of subscription owners.

Code examples

Noncompliant code example

resource "azurerm_role_definition" "example" { # Sensitive
  name        = "example"
  scope       = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}

Compliant solution

resource "azurerm_role_definition" "example" {
  name        = "example"
  scope       = data.azurerm_subscription.primary.id

  permissions {
    actions     = ["Microsoft.Compute/*"]
    not_actions = []
  }

  assignable_scopes = [
    data.azurerm_subscription.primary.id
  ]
}

Going the extra mile

Here is a list of recommendations that can be followed regarding good usage of roles:

  • Apply the least privilege principle by creating a custom role with as few permissions as possible.
  • As custom roles can be updated, gradually add atomic permissions when required.
  • Limit the assignable scopes of the custom role to a set of Resources or Resource Groups (see the sketch after this list).
  • When necessary, use the built-in Owner role instead of a custom role granting subscription owner capabilities.
  • Limit the assignments of Owner roles to fewer than three people or service principals.
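
As a sketch of the scope-limiting advice above (assuming an existing azurerm_resource_group.example), a role definition can be made assignable only within a single resource group:

resource "azurerm_role_definition" "rg_scoped" {
  name  = "rg-scoped-role" # hypothetical name
  scope = azurerm_resource_group.example.id

  permissions {
    actions     = ["Microsoft.Compute/*"]
    not_actions = []
  }

  # assignable only within this resource group, not the whole subscription
  assignable_scopes = [
    azurerm_resource_group.example.id
  ]
}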

Resources

Documentation

Standards

terraform:S6387

Azure RBAC roles can be assigned to users, groups, or service principals. A role assignment grants permissions on a predefined set of resources called "scope".

The widest scopes a role can be assigned to are:

  • Subscription: a role assigned with this scope grants access to all resources of this Subscription.
  • Management Group: a role assigned with this scope grants access to all resources of all the Subscriptions in this Management Group.

In case of security incidents involving a compromised identity (user, group, or service principal), limiting its role assignment to the narrowest scope possible helps separate duties and limits what resources are at risk.

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of the resources in the scope to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split the scope into multiple role assignments with a narrower scope.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the scope of the role assignment to a Resource or Resource Group.
  • Apply the least privilege principle by assigning roles granting as few permissions as possible.

Sensitive Code Example

resource "azurerm_role_assignment" "example" {
  scope                = data.azurerm_subscription.primary.id # Sensitive
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}

Compliant Solution

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Reader"
  principal_id         = data.azuread_user.user.object_id
}

See

terraform:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it within a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold: data breaches and exposure of encrypted data can undermine trust in the organization, since customers, clients, and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in AWS API Gateway

Code examples

These code samples illustrate how to fix this issue in both API Gateway and API Gateway V2.

Noncompliant code example

resource "aws_api_gateway_domain_name" "example" {
  domain_name = "api.example.com"
  security_policy = "TLS_1_0" # Noncompliant
}

API Gateway V2 uses a weak TLS version by default:

resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
  domain_name_configuration {} # Noncompliant
}

Compliant solution

resource "aws_api_gateway_domain_name" "example" {
  domain_name = "api.example.com"
  security_policy = "TLS_1_2"
}
resource "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
  domain_name_configuration {
    security_policy = "TLS_1_2"
  }
}

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that have since been deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

terraform:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant necessary permissions only to users for their required tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

resource "aws_s3_bucket_policy" "mynoncompliantpolicy" {  # Sensitive
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id = "mynoncompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
            Effect = "Allow"
            Principal = {
                AWS = "*"
            }
            Action = [
                "s3:PutObject"
            ]
            Resource: "${aws_s3_bucket.mybucket.arn}/*"
        }
    ]
  })
}

Compliant Solution

This policy allows only the authorized users:

resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = aws_s3_bucket.mybucket.id
  policy = jsonencode({
    Id = "mycompliantpolicy"
    Version = "2012-10-17"
    Statement = [{
            Effect = "Allow"
            Principal = {
                AWS = [
                    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
                ]
            }
            Action = [
                "s3:PutObject"
            ]
            Resource = "${aws_s3_bucket.mybucket.arn}/*"
        }
    ]
  })
}

See

terraform:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. If adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration: a volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.

Sensitive Code Example

For aws_ebs_volume:

resource "aws_ebs_volume" "ebs_volume" {  # Sensitive as encryption is disabled by default
}
resource "aws_ebs_volume" "ebs_volume" {
  encrypted = false  # Sensitive
}

For aws_ebs_encryption_by_default:

resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = false  # Sensitive
}

For aws_launch_configuration:

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {  # Sensitive as encryption is disabled by default
  }
  ebs_block_device {  # Sensitive as encryption is disabled by default
  }
}
resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = false  # Sensitive
  }
  ebs_block_device {
    encrypted = false  # Sensitive
  }
}

Compliant Solution

For aws_ebs_volume:

resource "aws_ebs_volume" "ebs_volume" {
  encrypted = true
}

For aws_ebs_encryption_by_default:

resource "aws_ebs_encryption_by_default" "default_encryption" {
  enabled = true  # Optional, default is "true"
}

For aws_launch_configuration:

resource "aws_launch_configuration" "launch_configuration" {
  root_block_device {
    encrypted = true
  }
  ebs_block_device {
    encrypted = true
  }
}

See

terraform:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (by using the wildcard "*", for example). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit the number of permissions that are granted and the number of resources to which they are granted. Doing so ensures that there are no users or roles that have more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS Identity and Access Management

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

resource "aws_iam_policy" "example" {
  name = "example"
  policy =<<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

resource "aws_iam_policy" "example" {
  name = "example"
  policy =<<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode"
            ],
            "Resource": "arn:aws:lambda:us-east-2:123456789012:function:my-function:1"
        }
    ]
}
EOF
}

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions that is tightened later, it is harder to ensure that no gaps are forgotten. In that case, it can be useful to monitor the users or roles to verify which permissions are actually used.

Resources

Documentation

Articles & blog posts

Standards

terraform:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information, that should not be stored unencrypted. If adversaries gain physical access to the storage media, they are not able to read the encrypted data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sagemaker_notebook_instance:

resource "aws_sagemaker_notebook_instance" "notebook" {  # Sensitive, encryption disabled by default
}

Compliant Solution

For aws_sagemaker_notebook_instance:

resource "aws_sagemaker_notebook_instance" "notebook" {
  kms_key_id = aws_kms_key.enc_key.key_id
}

See

terraform:S6327

Amazon Simple Notification Service (SNS) is a managed messaging service for application-to-application (A2A) and application-to-person (A2P) communication. SNS topics allow publisher systems to fan out messages to a large number of subscriber systems. Amazon SNS allows messages to be encrypted as soon as they are received. If adversaries gain physical access to the storage medium, or a message otherwise leaks, they are not able to access the data.

Ask Yourself Whether

  • The topic contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SNS topics that contain sensitive information. Encryption and decryption are handled transparently by SNS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sns_topic:

resource "aws_sns_topic" "topic" {  # Sensitive, encryption disabled by default
  name = "sns-unencrypted"
}

Compliant Solution

For aws_sns_topic:

resource "aws_sns_topic" "topic" {
  name = "sns-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}

See

terraform:S6403

By default, GCP SQL instances offer encryption in transit, with support for TLS, but insecure connections are still accepted. On an unsecured network, such as a public network, the risk of traffic being intercepted is high. When the data isn’t encrypted, an attacker can intercept it and read confidential information.

When creating a GCP SQL instance, a public IP address is automatically assigned to it and connections to the SQL instance from public networks can be authorized.

TLS is automatically used when connecting to SQL instances through mechanisms such as the Cloud SQL Auth proxy.

Ask Yourself Whether

Connections are not already automatically encrypted by GCP (e.g., through the SQL Auth proxy), and

  • Connections to the SQL instance are performed on untrusted networks.
  • The data stored in the SQL instance is confidential.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt all connections to the SQL instance, whether using public or private IP addresses. However, since private networks can be considered trusted, requiring TLS in this situation is usually a lower priority task.

Sensitive Code Example

resource "google_sql_database_instance" "example" { # Sensitive: tls is not required
  name             = "noncompliant-master-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
  }
}

Compliant Solution

resource "google_sql_database_instance" "example" {
  name             = "compliant-master-instance"
  database_version = "POSTGRES_11"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl = true
      ipv4_enabled = true
    }
  }
}

See

terraform:S6404

Granting public access to GCP resources may reduce an organization’s ability to protect itself against attacks or theft of its GCP resources.
Security incidents associated with misuse of public access include disruption of critical functions, data theft, and additional costs due to resource overload.

To be as prepared as possible in the event of a security incident, authentication combined with fine-grained permissions helps maintain the principle of defense in depth and trace incidents back to the perpetrators.

GCP also provides the ability to grant access to a large group of people:

  • If public access is granted to all Google users, the impact of a data theft is the same as if public access is granted to all Internet users.
  • If access is granted to a large Google group, the impact of a data theft is limited based on the size of the group.

The only thing that changes in these cases is the ability to track user access in the event of an incident.

Ask Yourself Whether

  • This GCP resource is essential to the information system infrastructure.
  • This GCP resource is essential to mission-critical functions.
  • This GCP resource stores or processes sensitive data.
  • Compliance policies require that access to this resource be authenticated.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Explicitly set access to this resource or function as private.

Sensitive Code Example

For IAM resources:

resource "google_cloudfunctions_function_iam_binding" "example" {
  members = [
    "allUsers",              # Sensitive
    "allAuthenticatedUsers", # Sensitive
  ]
}

resource "google_cloudfunctions_function_iam_member" "example" {
  member = "allAuthenticatedUsers" # Sensitive
}

For ACL resources:

resource "google_storage_bucket_access_control" "example" {
  entity = "allUsers" # Sensitive
}

resource "google_storage_bucket_acl" "example" {
  role_entity = [
    "READER:allUsers",              # Sensitive
    "READER:allAuthenticatedUsers", # Sensitive
  ]
}

For container clusters:

resource "google_container_cluster" "example" {
  private_cluster_config {
    enable_private_nodes    = false # Sensitive
    enable_private_endpoint = false # Sensitive
  }
}

Compliant Solution

For IAM resources:

resource "google_cloudfunctions_function_iam_binding" "example" {
  members = [
    "serviceAccount:${google_service_account.example.email}",
    "group:${var.example_group}"
  ]
}

resource "google_cloudfunctions_function_iam_member" "example" {
  member = "user:${var.example_user}" # Sensitive
}

For ACL resources:

resource "google_storage_bucket_access_control" "example" {
  entity = "user-${var.example_user}"
}

resource "google_storage_bucket_acl" "example" {
  role_entity = [
    "READER:user-name@example.com",
    "READER:group-admins@example.com"
  ]
}

For container clusters:

resource "google_container_cluster" "example" {
  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
  }
}

See

terraform:S6245

This rule is deprecated, and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed: as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk theft, improper disk disposal, and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply with regulations, like HIPAA or PCI DSS, or other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

Compliant Solution

Server-side encryption with Amazon S3-managed keys is used for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

Server-side encryption with Amazon S3-managed keys is used for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
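
For the SSE-KMS option described above, the same configuration block accepts a customer-managed key instead. A minimal sketch, assuming a hypothetical aws_kms_key named enc_key defined elsewhere:

resource "aws_s3_bucket_server_side_encryption_configuration" "example_kms" {
  bucket = aws_s3_bucket.example.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      # References a hypothetical customer-managed KMS key defined elsewhere.
      kms_master_key_id = aws_kms_key.enc_key.arn
    }
  }
}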

See

terraform:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks the encryption of transported data, as well as the capability to build an authenticated connection. It means that a malicious actor who is able to intercept traffic from the network can read, modify or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to deny all HTTP requests:

  • for all objects (*) of the bucket
  • for all principals (*)
  • for all actions (*)

Sensitive Code Example

No secure policy is attached to this bucket:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
}

A policy is defined but forces HTTPS communication only for some users:

resource "aws_s3_bucket" "mynoncompliantbucket" { # Sensitive
  bucket = "mynoncompliantbucketname"
}

resource "aws_s3_bucket_policy" "mynoncompliantbucketpolicy" {
  bucket = "mynoncompliantbucketname"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "mynoncompliantbucketpolicy"
    Statement = [
      {
        Sid       = "HTTPSOnly"
        Effect    = "Deny"
        Principal = [
          "arn:aws:iam::123456789123:root"
        ] # secondary location: only one principal is forced to use https
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.mynoncompliantbucket.arn,
          "${aws_s3_bucket.mynoncompliantbucket.arn}/*",
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

Compliant Solution

A secure policy that denies all HTTP requests is used:

resource "aws_s3_bucket" "mycompliantbucket" {
  bucket = "mycompliantbucketname"
}

resource "aws_s3_bucket_policy" "mycompliantpolicy" {
  bucket = "mycompliantbucketname"

  policy = jsonencode({
    Version = "2012-10-17"
    Id      = "mycompliantpolicy"
    Statement = [
      {
        Sid       = "HTTPSOnly"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.mycompliantbucket.arn,
          "${aws_s3_bucket.mycompliantbucket.arn}/*",
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
    ]
  })
}

See

terraform:S6329

Enabling public network access to cloud resources can affect an organization’s ability to protect its data or internal operations from data theft or disruption.

Depending on the component, inbound access from the Internet can be enabled via:

  • a boolean value that explicitly allows access to the public network.
  • the assignment of a public IP address.
  • database firewall rules that allow public IP ranges.

Deciding to allow public access may happen for various reasons, such as quick maintenance, saving time, or simply by accident.

This decision increases the likelihood of attacks on the organization, such as:

  • data breaches.
  • intrusions into the infrastructure to permanently steal from it.
  • and various malicious traffic, such as DDoS attacks.

Ask Yourself Whether

This cloud resource:

  • should be publicly accessible to any Internet user.
  • requires inbound traffic from the Internet to function properly.

There is a risk if you answered no to any of those questions.

Recommended Secure Coding Practices

Avoid publishing cloud services on the Internet unless they are intended to be publicly accessible, such as customer portals or e-commerce sites.

Use private networks (and associated private IP addresses) and VPC peering or other secure communication tunnels to communicate with other cloud components.

The goal is to prevent the component from intercepting traffic coming in via the public IP address. If the cloud resource does not support the absence of a public IP address, assign a public IP address to it, but do not create listeners for the public IP address.

Sensitive Code Example

For AWS:

resource "aws_instance" "example" {
  associate_public_ip_address = true # Sensitive
}
resource "aws_dms_replication_instance" "example" {
  publicly_accessible = true # Sensitive
}

For Azure:

resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = true # Sensitive
}
resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = true # Sensitive
}
resource "azurerm_kubernetes_cluster" "production" {
  api_server_authorized_ip_ranges = ["176.0.0.0/4"] # Sensitive
  default_node_pool {
    enable_node_public_ip = true # Sensitive
  }
}

For GCP:

resource "google_compute_instance" "example" {
  network_interface {
    network = "default"

    access_config {  # Sensitive
      # Ephemeral public IP
    }
  }
}

Compliant Solution

For AWS:

resource "aws_instance" "example" {
  associate_public_ip_address = false
}
resource "aws_dms_replication_instance" "example" {
  publicly_accessible          = false
}

For Azure:

resource "azurerm_postgresql_server" "example"  {
  public_network_access_enabled = false
}
resource "azurerm_kubernetes_cluster" "production" {
  api_server_authorized_ip_ranges = ["192.168.0.0/16"]
  default_node_pool {
    enable_node_public_ip = false
  }
}

For GCP:

resource "google_compute_instance" "example" {
  network_interface {
    network = google_compute_network.vpc_network_example.name
  }
}

Note that relying on the default network (network = "default") leads to other security problems, such as the removal of logging, the loss of Cloud VPN and VPC Network Peering support, and the addition of insecure firewall rules.
A safer alternative is to create a dedicated VPC or subnetwork and enforce security measures on it.
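
A minimal sketch of such a dedicated network, assuming illustrative names (vpc_network_example matches the reference in the compliant example above):

resource "google_compute_network" "vpc_network_example" {
  name                    = "vpc-network-example"
  # Create subnetworks explicitly instead of relying on auto mode.
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "example" {
  name          = "subnetwork-example"
  ip_cidr_range = "10.0.0.0/24"
  region        = "us-central1"
  network       = google_compute_network.vpc_network_example.id
}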

See

terraform:S6400

Granting highly privileged resource rights to users or groups can reduce an organization’s ability to protect against account or service theft. It prevents proper segregation of duties and creates potentially critical attack vectors on affected resources.

If elevated access rights are abused or compromised, both the data that the affected resources work with and their access tracking are at risk.

Ask Yourself Whether

  • This GCP resource is essential to the information system infrastructure.
  • This GCP resource is essential to mission-critical functions.
  • Compliance policies require that administrative privileges for this resource be limited to a small group of individuals.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Grant IAM policies or members a less permissive role: In most cases, granting them read-only privileges is sufficient.

Separate tasks by creating multiple roles that do not use a full access role for day-to-day work.

If the predefined GCP roles do not include the specific permissions you need, create custom IAM roles.
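
As a sketch, a custom role granting only the read permissions needed for a given task might look like the following (the role identifier, title, and permission set are illustrative):

resource "google_project_iam_custom_role" "read_only_example" {
  role_id = "runServicesViewer" # illustrative identifier
  title   = "Cloud Run Services Viewer"
  permissions = [
    # Read-only permissions, sufficient for day-to-day inspection tasks.
    "run.services.get",
    "run.services.list",
  ]
}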

Sensitive Code Example

For an IAM policy setup:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/run.admin" # Sensitive
    members = [
      "user:name@example.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "policy" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.admin.policy_data
}

For an IAM policy binding:

resource "google_cloud_run_service_iam_binding" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/run.admin" # Sensitive
  members = [
    "user:name@example.com",
  ]
}

For adding a member to a policy:

resource "google_cloud_run_service_iam_member" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/run.admin" # Sensitive
  member = "user:name@example.com"
}

Compliant Solution

For an IAM policy setup:

data "google_iam_policy" "admin" {
  binding {
    role = "roles/viewer"
    members = [
      "user:name@example.com",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.admin.policy_data
}

For an IAM policy binding:

resource "google_cloud_run_service_iam_binding" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/viewer"
  members = [
    "user:name@example.com",
  ]
}

For adding a member to a policy:

resource "google_cloud_run_service_iam_member" "example" {
  location = google_cloud_run_service.default.location
  project = google_cloud_run_service.default.project
  service = google_cloud_run_service.default.name
  role = "roles/viewer"
  member = "user:name@example.com"
}

See

terraform:S6405

SSH keys stored and managed in a project’s metadata can be used to access GCP VM instances. By default, GCP automatically deploys project-level SSH keys to VM instances.

Project-level SSH keys can lead to unauthorized access because:

  • Their use prevents fine-grained VM-level access control and makes it difficult to follow the principle of least privilege.
  • Unlike managed access control with OS Login, manual cryptographic key management is error-prone and requires careful attention. For example, if a user leaves a project, their SSH keys should be removed from the metadata to prevent unwanted access.
  • If a project-level SSH key is compromised, all VM instances may be compromised.

Ask Yourself Whether

  • VM instances in a project have different security requirements.
  • Many users with different profiles need access to the VM instances in that project.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Block project-level SSH keys by setting the metadata.block-project-ssh-keys argument to true
  • Use OS Login to benefit from managed access control (see the sketch below).
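
A minimal sketch combining both practices on a single instance, abbreviated in the style of the examples below (enable-oslogin and block-project-ssh-keys are the metadata keys documented by GCP):

resource "google_compute_instance" "oslogin_example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    block-project-ssh-keys = true
    # Assumes OS Login is acceptable for this project; it replaces
    # metadata-based SSH key management with IAM-backed access control.
    enable-oslogin = "TRUE"
  }
}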

Sensitive Code Example

resource "google_compute_instance" "example" { # Sensitive, because metadata.block-project-ssh-keys is not set to true
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  network_interface {
    network = "default"

    access_config {
    }
  }
}

Compliant Solution

resource "google_compute_instance" "example" {
  name         = "example"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  metadata = {
    block-project-ssh-keys = true
  }

  network_interface {
    network = "default"

    access_config {
    }
  }
}

See

terraform:S6406

Excessive granting of GCP IAM permissions can allow attackers to exploit an organization’s cloud resources with malicious intent.

To prevent improper creation or deletion of resources after an account is compromised, proactive measures include both following GCP Security Insights and ensuring custom roles contain as few privileges as possible.

After gaining a foothold in the target infrastructure, sophisticated attacks typically consist of two major parts.
First, attackers must deploy new resources to carry out their malicious intent. To guard against this, operations teams must control what unexpectedly appears in the infrastructure, such as what is:

  • added
  • written down
  • updated
  • started
  • appended
  • applied
  • accessed.

Once the malicious intent is executed, attackers must avoid detection at all costs.
To counter attackers' attempts to remove their fingerprints, operations teams must control what unexpectedly disappears from the infrastructure, such as what is:

  • stopped
  • disabled
  • canceled
  • deleted
  • destroyed
  • detached
  • disconnected
  • suspended
  • rejected
  • removed.

For operations teams to be resilient in this scenario, their organization must apply both:

  • Detection security: log these actions to better detect malicious actions.
  • Preventive security: review and limit granted permissions.

This rule raises an issue when a custom role grants a number of sensitive permissions (read-write or destructive permissions) that exceeds a given parameter.

Ask Yourself Whether

  • This custom role will be mostly used for read-only purposes.
  • Compliance policies require read-only access.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To reduce the risks associated with this role after a compromise:

  • Reduce the list of permissions to grant only those that are actually needed.
  • Favor read-only over read-write.

Sensitive Code Example

This custom role grants more than 5 sensitive permissions:

resource "google_project_iam_custom_role" "example" {
  permissions = [ # Sensitive
    "resourcemanager.projects.create", # Sensitive permission
    "resourcemanager.projects.delete", # Sensitive permission
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create", # Sensitive permission
    "run.services.delete", # Sensitive permission
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.setIamPolicy",  # Sensitive permission
    "run.services.list",
    "run.services.update",  # Sensitive permission
  ]
}

Compliant Solution

This custom role grants fewer than 5 sensitive permissions:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

See

terraform:S6281

By default, S3 buckets are private: only the bucket owner can access them.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies from being set on an S3 bucket, the following settings can be configured:

  • BlockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • IgnorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • BlockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • RestrictPublicBuckets: whether to restrict access to buckets with public policies to principals within the bucket owner account.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, CSS, etc.).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • BlockPublicAcls to true to block new attempts to set public ACLs.
  • IgnorePublicAcls to true to block existing public ACLs.
  • BlockPublicPolicy to true to block new attempts to set public policies.
  • RestrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, the aws_s3_bucket_public_access_block is fully deactivated (nothing is blocked):

resource "aws_s3_bucket" "example" { # Sensitive: no Public Access Block defined for this bucket
  bucket = "example"
}

This aws_s3_bucket_public_access_block allows public ACL to be set:

resource "aws_s3_bucket" "example" {  # Sensitive
  bucket = "examplename"
}

resource "aws_s3_bucket_public_access_block" "example-public-access-block" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = false # should be true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Compliant Solution

This aws_s3_bucket_public_access_block blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_public_access_block" "example-public-access-block" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

See

terraform:S6321

Why is this an issue?

Cloud platforms such as AWS, Azure, or GCP support virtual firewalls that can be used to restrict access to services by controlling inbound and outbound traffic.
Any firewall rule allowing traffic from all IP addresses to standard network ports on which administration services traditionally listen, such as 22 for SSH, can expose these services to exploits and unauthorized access.

What is the potential impact?

Like any other service, administration services can contain vulnerabilities. Administration services run with elevated privileges and thus a vulnerability could have a high impact on the system.

Additionally, credentials might be leaked through phishing or similar techniques. Attackers who are able to reach the services could use the credentials to log in to the system.

How to fix it

It is recommended to restrict access to remote administration services to only trusted IP addresses. In practice, trusted IP addresses are those held by system administrators or those of bastion-like servers.

Code examples

Noncompliant code example

An ingress rule allowing all inbound SSH traffic for AWS:

resource "aws_security_group" "noncompliant" {
  name        = "allow_ssh_noncompliant"
  description = "allow_ssh_noncompliant"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "SSH rule"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]  # Noncompliant
  }
}

A security rule allowing all inbound SSH traffic for Azure:

resource "azurerm_network_security_rule" "noncompliant" {
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"  # Noncompliant
  destination_address_prefix  = "*"
}

A firewall rule allowing all inbound SSH traffic for GCP:

resource "google_compute_firewall" "noncompliant" {
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]  # Noncompliant
}

Compliant solution

An ingress rule allowing inbound SSH traffic from specific IP addresses for AWS:

resource "aws_security_group" "compliant" {
  name        = "allow_ssh_compliant"
  description = "allow_ssh_compliant"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "SSH rule"
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["1.2.3.0/24"]
  }
}

A security rule allowing inbound SSH traffic from specific IP addresses for Azure:

resource "azurerm_network_security_rule" "compliant" {
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "1.2.3.0"
  destination_address_prefix  = "*"
}

A firewall rule allowing inbound SSH traffic from specific IP addresses for GCP:

resource "google_compute_firewall" "compliant" {
  network = google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["10.0.0.1/32"]
}

Resources

Documentation

Standards

terraform:S6364

Reducing the backup retention duration can reduce an organization’s ability to re-establish service in case of a security incident.

Data backups make it possible to overcome corruption or unavailability of data by recovering as efficiently as possible from a security incident.

Backup retention duration, coverage, and backup locations are essential criteria regarding functional continuity.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be backed up for a specific amount of time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the backup retention period to a duration sufficient to restore service in case of an incident.

Sensitive Code Example

For Amazon Relational Database Service clusters and instances:

resource "aws_db_instance" "example" {
  backup_retention_period = 2 # Sensitive
}

For Azure Cosmos DB accounts:

resource "azurerm_cosmosdb_account" "example" {
  backup {
    type = "Periodic"
    retention_in_hours = 8 # Sensitive
  }
}

Compliant Solution

For Amazon Relational Database Service clusters and instances:

resource "aws_db_instance" "example" {
  backup_retention_period = 5
}

For Azure Cosmos DB accounts:

resource "azurerm_cosmosdb_account" "example" {
  backup {
    type = "Periodic"
    retention_in_hours = 300
  }
}
terraform:S6401

The likelihood of security incidents increases when cryptographic keys are used for a long time. Thus, to strengthen data protection, it’s recommended to rotate symmetric keys created with the Google Cloud Key Management Service (KMS) automatically and periodically. Note that GCP KMS cannot rotate asymmetric keys automatically.

Ask Yourself Whether

  • The cryptographic key is a symmetric key.
  • The application requires compliance with some security standards like PCI-DSS.

Recommended Secure Coding Practices

It’s recommended to rotate keys automatically and regularly. The shorter the key rotation period, the less data can be decrypted by an attacker if a key is compromised. The rotation period usually depends on the amount of data encrypted with the key, or on other requirements such as compliance with security standards. In general, a period of 90 days can be used.

Sensitive Code Example

resource "google_kms_crypto_key" "noncompliant-key" { # Sensitive: no rotation period is defined
  name            = "example"
  key_ring        = google_kms_key_ring.keyring.id
}

Compliant Solution

resource "google_kms_crypto_key" "compliant-key" {
  name            = "example"
  key_ring        = google_kms_key_ring.keyring.id
  rotation_period = "7776000s" # 90 days
}

See

terraform:S6402

Domain Name Systems (DNS) are vulnerable by default to various types of attacks.

One of the biggest risks is DNS cache poisoning, which occurs when a DNS server accepts spoofed DNS data, caches the malicious records, and potentially sends them later in response to legitimate DNS lookups. This attack typically relies on the attacker’s man-in-the-middle (MITM) position on the network and can be used to redirect users from an intended website to a malicious one.

To prevent these vulnerabilities, Domain Name System Security Extensions (DNSSEC) ensure the integrity and authenticity of DNS data by digitally signing DNS zones.

The public key of a DNS zone used to validate signatures can be trusted as DNSSEC is based on the following chain of trust:

  • The parent DNS zone adds a "fingerprint" of the public key of the child zone in a "DS record".
  • The parent DNS zone signs it with its own private key.
  • And this process continues until the root zone.

Ask Yourself Whether

The parent DNS zone (likely managed by the DNS registrar of the domain name) supports DNSSEC and

  • The DNS zone is public (contains data such as publicly reachable IP addresses).

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to use DNSSEC when creating private and public DNS zones.

Private DNS zones cannot be queried on the Internet and provide DNS name resolution for private networks. The risk of MITM attacks might be considered low on these networks and therefore implementing DNSSEC is still recommended but not with a high priority.

Note: Choose a robust signing algorithm when setting up DNSSEC, such as rsasha256. The insecure rsasha1 algorithm should no longer be used.

Sensitive Code Example

resource "google_dns_managed_zone" "example" { # Sensitive: dnssec_config is missing
  name     = "foobar"
  dns_name = "foo.bar."
}

Compliant Solution

resource "google_dns_managed_zone" "example" {
  name     = "foobar"
  dns_name = "foo.bar."

  dnssec_config {
    default_key_specs {
      algorithm = "rsasha256"
    }
  }
}

See

terraform:S6407

App Engine supports encryption in transit through TLS. As soon as the app is deployed, it can be requested using appspot.com domains or custom domains. By default, endpoints accept both clear-text and encrypted traffic. When communication isn’t encrypted, there is a risk that an attacker could intercept it and read confidential information.

When creating an App Engine app, request handlers can be set with different security levels for encryption:

  • SECURE_NEVER: only HTTP requests are allowed (HTTPS requests are redirected to HTTP).
  • SECURE_OPTIONAL and SECURE_DEFAULT: both HTTP and HTTPS requests are allowed.
  • SECURE_ALWAYS: only HTTPS requests are allowed (HTTP requests are redirected to HTTPS).

Ask Yourself Whether

  • The handler serves confidential data in HTTP responses.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended for App Engine handlers to require TLS for all traffic. It can be achieved by setting the security level to SECURE_ALWAYS.

Sensitive Code Example

SECURE_DEFAULT, SECURE_NEVER and SECURE_OPTIONAL are sensitive TLS security levels:

resource "google_app_engine_standard_app_version" "example" {
  version_id = "v1"
  service    = "default"
  runtime    = "nodejs"

  handlers {
    url_regex                   = ".*"
    redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301"
    security_level              = "SECURE_OPTIONAL" # Sensitive
    script {
      script_path = "auto"
    }
  }
}

Compliant Solution

Force the use of TLS for the handler by setting the security level on SECURE_ALWAYS:

resource "google_app_engine_standard_app_version" "example" {
  version_id = "v1"
  service    = "default"
  runtime    = "nodejs"

  handlers {
    url_regex                   = ".*"
    redirect_http_response_code = "REDIRECT_HTTP_RESPONSE_CODE_301"
    security_level              = "SECURE_ALWAYS"
    script {
      script_path = "auto"
    }
  }
}

See

terraform:S6408

Creating custom roles that allow privilege escalation can allow attackers to maliciously exploit an organization’s cloud resources.

Certain GCP permissions allow impersonation of one or more privileged principals within a GCP infrastructure.
To prevent privilege escalation after an account has been compromised, proactively follow GCP Security Insights and ensure that custom roles contain as few privileges as possible that allow direct or indirect impersonation.

For example, privileges like deploymentmanager.deployments.create allow impersonation of service accounts, even if the name does not sound like it.
Other privileges like setIamPolicy, which are more explicit, directly allow their holder to extend their privileges.

After gaining a foothold in the target infrastructure, sophisticated attackers typically map their newfound roles to understand what is exploitable.

The riskiest privileges are either:

  • At the infrastructure level: privileges to perform project, folder, or organization-wide administrative tasks.
  • At the resource level: privileges to perform resource-wide administrative tasks.

In either case, the following privileges should be avoided or granted only with caution:

  • ..setIamPolicy
  • cloudbuild.builds.create
  • cloudfunctions.functions.create
  • cloudfunctions.functions.update
  • cloudscheduler.jobs.create
  • composer.environments.create
  • compute.instances.create
  • dataflow.jobs.create
  • dataproc.clusters.create
  • deploymentmanager.deployments.create
  • iam.roles.update
  • iam.serviceAccountKeys.create
  • iam.serviceAccounts.actAs
  • iam.serviceAccounts.getAccessToken
  • iam.serviceAccounts.getOpenIdToken
  • iam.serviceAccounts.implicitDelegation
  • iam.serviceAccounts.signBlob
  • iam.serviceAccounts.signJwt
  • orgpolicy.policy.set
  • run.services.create
  • serviceusage.apiKeys.create
  • serviceusage.apiKeys.list
  • storage.hmacKeys.create

Ask Yourself Whether

  • This role requires impersonation to perform specific tasks with different privileges.
  • This custom role is intended for a small group of administrators.

There is a risk if you answered no to these questions.

Recommended Secure Coding Practices

Use a permission that does not allow privilege escalation.

Sensitive Code Example

Lightweight custom role intended for a developer:

resource "google_organization_iam_custom_role" "example" {
  permissions = [
    "iam.serviceAccounts.getAccessToken",     # Sensitive
    "iam.serviceAccounts.getOpenIdToken",     # Sensitive
    "iam.serviceAccounts.actAs",              # Sensitive
    "iam.serviceAccounts.implicitDelegation", # Sensitive
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

Lightweight custom role intended for a read-only user:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "iam.serviceAccountKeys.create",        # Sensitive
    "iam.serviceAccountKeys.get",           # Sensitive
    "deploymentmanager.deployments.create", # Sensitive
    "cloudbuild.builds.create",             # Sensitive
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
  ]
}

Compliant Solution

Lightweight custom role intended for a developer:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.create",
    "run.services.delete",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
    "run.services.update",
  ]
}

Lightweight custom role intended for a read-only user:

resource "google_project_iam_custom_role" "example" {
  permissions = [
    "resourcemanager.projects.get",
    "resourcemanager.projects.list",
    "run.services.get",
    "run.services.getIamPolicy",
    "run.services.list",
  ]
}

See

terraform:S6409

Enabling Legacy Authorization (Attribute-Based Access Control, or ABAC) on Google Kubernetes Engine resources can reduce an organization’s ability to protect itself against access controls being compromised.

For Kubernetes, Attribute-Based Access Control has been superseded by Role-Based Access Control. ABAC is not under active development anymore and thus should be avoided.

Ask Yourself Whether

  • This resource is essential for the information system infrastructure.
  • This resource is essential for mission-critical functions.
  • Compliance policies require access to this resource to be enforced through the use of Role-Based Access Control.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Unless you are relying on ABAC, leave it disabled.

Sensitive Code Example

For Google Kubernetes Engine:

resource "google_container_cluster" "example" {
  enable_legacy_abac = true # Sensitive
}

Compliant Solution

For Google Kubernetes Engine:

resource "google_container_cluster" "example" {
  enable_legacy_abac = false
}

See

terraform:S6414

The Google Cloud audit logs service records administrative activities and accesses to Google Cloud resources of the project. It is important to enable audit logs to be able to investigate malicious activities in the event of a security incident.

Some project members may be exempted from having their activities recorded in the Google Cloud audit log service, creating a blind spot and reducing the capacity to investigate future security events.

Ask Yourself Whether

  • The members exempted from having their activity logged have high privileges.
  • Compliance rules require that audit logging be activated for all members.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to have a consistent audit logging policy for all project members and therefore not to create logging exemptions for certain members.

Sensitive Code Example

resource "google_project_iam_audit_config" "example" {
  project = data.google_project.project.id
  service = "allServices"
  audit_log_config {
    log_type = "ADMIN_READ"
    exempted_members = [ # Sensitive
      "user:rogue.administrator@gmail.com",
    ]
  }
}

Compliant Solution

resource "google_project_iam_audit_config" "example" {
  project = data.google_project.project.id
  service = "allServices"
  audit_log_config {
    log_type = "ADMIN_READ"
  }
}

See

terraform:S6252

S3 buckets can be in three states related to versioning:

  • unversioned (default one)
  • enabled
  • suspended

When an S3 bucket is unversioned or has versioning suspended, a new version of an object overwrites the existing one in the bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

Versioning is disabled by default:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

Compliant Solution

Versioning is enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example-versioning" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Versioning is enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  versioning {
    enabled = true
  }
}

See

terraform:S6258

Disabling logging of this component can lead to missing traceability in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require this component to be monitored.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable the logging capabilities of this component. Depending on the component, new permissions might be required by the logging storage components.
You should consult the official documentation to enable logging for the impacted components. For example, AWS Application Load Balancer Access Logs require an additional bucket policy.
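
As an illustration of the Application Load Balancer case, a minimal sketch of enabling access logs (the load balancer name is illustrative, and aws_s3_bucket.example-logs is a hypothetical log bucket that also needs the additional bucket policy mentioned above):

resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "application"

  access_logs {
    bucket  = aws_s3_bucket.example-logs.id # hypothetical log bucket
    prefix  = "log/alb"
    enabled = true
  }
}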

Sensitive Code Example

For Amazon S3 access requests:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"
}

For Amazon API Gateway stages:

resource "aws_api_gateway_stage" "example" { # Sensitive
  xray_tracing_enabled = false # Sensitive
}

For Amazon MSK Broker logs:

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "2.7.1"
  number_of_broker_nodes = 3

  logging_info {
    broker_logs { # Sensitive
      firehose {
        enabled = false
      }
      s3 {
        enabled = false
      }
    }
  }
}

For Amazon MQ Brokers:

resource "aws_mq_broker" "example" {
  logs {  # Sensitive
    audit   = false
    general = false
  }
}

For Amazon DocumentDB:

resource "aws_docdb_cluster" "example" { # Sensitive
  cluster_identifier = "example"
}

For Azure App Services:

resource "azurerm_app_service" "example" {
  logs {
    application_logs {
      file_system_level = "Off" # Sensitive
      azure_blob_storage {
        level = "Off"           # Sensitive
      }
    }
  }
}

For GCP VPC Subnetwork:

resource "google_compute_subnetwork" "example" { # Sensitive
  name          = "example"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.example.id
}

For GCP SQL Database Instance:

resource "google_sql_database_instance" "example" {
  name = "example"

  settings { # Sensitive
    tier = "db-f1-micro"
    ip_configuration {
      require_ssl  = true
      ipv4_enabled = true
    }
  }
}

For GCP Kubernetes Engine (GKE) cluster:

resource "google_container_cluster" "example" {
  name               = "example"
  logging_service    = "none" # Sensitive
}

Compliant Solution

For Amazon S3 access requests:

resource "aws_s3_bucket" "example-logs" {
  bucket = "example_logstorage"
  acl    = "log-delivery-write"
}

resource "aws_s3_bucket" "example" {
  bucket = "example"

  logging { # AWS provider <= 3
      target_bucket = aws_s3_bucket.example-logs.id
      target_prefix = "log/example"
  }
}

resource "aws_s3_bucket_logging" "example" { # AWS provider >= 4
  bucket = aws_s3_bucket.example.id

  target_bucket = aws_s3_bucket.example-logs.id
  target_prefix = "log/example"
}

For Amazon API Gateway stages:

resource "aws_api_gateway_stage" "example" {
  xray_tracing_enabled = true

  access_log_settings {
    destination_arn = "arn:aws:logs:eu-west-1:123456789:example"
    format = "..."
  }
}

For Amazon MSK Broker logs:

resource "aws_msk_cluster" "example" {
  cluster_name           = "example"
  kafka_version          = "2.7.1"
  number_of_broker_nodes = 3

  logging_info {
    broker_logs {
      firehose   {
        enabled = false
      }
      s3 {
        enabled = true
        bucket  = "example"
        prefix  = "log/msk-"
      }
    }
  }
}

For Amazon MQ Brokers, enable audit or general:

resource "aws_mq_broker" "example" {
  logs {
    audit   = true
    general = true
  }
}

For Amazon DocumentDB:

resource "aws_docdb_cluster" "example" {
  cluster_identifier              = "example"
  enabled_cloudwatch_logs_exports = ["audit"]
}

For Azure App Services:

resource "azurerm_app_service" "example" {
 logs {
    http_logs {
      file_system {
        retention_in_days = 90
        retention_in_mb   = 100
      }
    }

 application_logs {
      file_system_level = "Error"
      azure_blob_storage {
        retention_in_days = 90
        level             = "Error"
      }
    }
  }
}

For GCP VPC Subnetwork:

resource "google_compute_subnetwork" "example" {
  name          = "example"
  ip_cidr_range = "10.2.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.example.id

  log_config {
    aggregation_interval = "INTERVAL_10_MIN"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}

For GCP SQL Database Instance:

resource "google_sql_database_instance" "example" {
  name             = "example"

  settings {
    ip_configuration {
      require_ssl  = true
      ipv4_enabled = true
    }
    database_flags {
      name  = "log_connections"
      value = "on"
    }
    database_flags {
      name  = "log_disconnections"
      value = "on"
    }
    database_flags {
      name  = "log_checkpoints"
      value = "on"
    }
    database_flags {
      name  = "log_lock_waits"
      value = "on"
    }
  }
}

For GCP Kubernetes Engine (GKE) cluster:

resource "google_container_cluster" "example" {
  name               = "example"
  logging_service    = "logging.googleapis.com/kubernetes"
}

See

terraform:S6330

Amazon Simple Queue Service (SQS) is a managed message queuing service for application-to-application (A2A) communication. Amazon SQS can store messages encrypted as soon as they are received. In the case that adversaries gain physical access to the storage medium or otherwise leak a message from the file system, for example through a vulnerability in the service, they are not able to access the data.

Ask Yourself Whether

  • The queue contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SQS queues that contain sensitive information. Encryption and decryption are handled transparently by SQS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_sqs_queue:

resource "aws_sqs_queue" "queue" {  # Sensitive, encryption disabled by default
  name = "sqs-unencrypted"
}

Compliant Solution

For aws_sqs_queue:

resource "aws_sqs_queue" "queue" {
  name = "sqs-encrypted"
  kms_master_key_id = aws_kms_key.enc_key.key_id
}
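
The compliant example assumes a KMS key resource defined elsewhere. A minimal sketch of such a key, plus an alternative using SQS-managed encryption (the sqs_managed_sse_enabled argument assumes a recent AWS provider version; all names here are illustrative):

resource "aws_kms_key" "enc_key" {
  description         = "Key used to encrypt SQS messages"
  # Optional but recommended: rotate the key material yearly.
  enable_key_rotation = true
}

# Alternative: SQS-managed server-side encryption (SSE-SQS),
# which avoids managing a KMS key.
resource "aws_sqs_queue" "queue_sse_sqs" {
  name                    = "sqs-encrypted-sse"
  sqs_managed_sse_enabled = true
}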

See

terraform:S6333

Creating APIs without authentication unnecessarily increases the attack surface on the target infrastructure.

Unless another authentication method is used, attackers have the opportunity to attempt attacks against the underlying API.
This means attacks both on the functionality provided by the API and its infrastructure.

Ask Yourself Whether

  • The underlying API exposes all of its contents to any anonymous Internet user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

In general, prefer limiting API access to a specific set of people or entities.

AWS provides multiple methods to do so:

  • AWS_IAM, to use standard AWS IAM roles and policies.
  • COGNITO_USER_POOLS, to use customizable OpenID Connect (OIDC) identity providers (IdP).
  • CUSTOM, to use an AWS-independent OIDC provider, glued to the infrastructure with a Lambda authorizer.

Sensitive Code Example

A public API that doesn’t have access control implemented:

resource "aws_api_gateway_method" "noncompliantapi" {
  authorization = "NONE" # Sensitive
  http_method   = "GET"
}

Compliant Solution

An API that implements AWS IAM permissions:

resource "aws_api_gateway_method" "compliantapi" {
  authorization = "AWS_IAM"
  http_method   = "GET"
}
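
For the COGNITO_USER_POOLS option listed above, a sketch might look like the following (the REST API, resource, and user pool references are hypothetical):

resource "aws_api_gateway_authorizer" "cognito" {
  name          = "cognito-authorizer"
  rest_api_id   = aws_api_gateway_rest_api.example.id # hypothetical API
  type          = "COGNITO_USER_POOLS"
  provider_arns = [aws_cognito_user_pool.example.arn] # hypothetical pool
}

resource "aws_api_gateway_method" "cognitoapi" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  resource_id   = aws_api_gateway_resource.example.id
  http_method   = "GET"
  authorization = "COGNITO_USER_POOLS"
  authorizer_id = aws_api_gateway_authorizer.cognito.id
}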

See

terraform:S6378

Disabling Managed Identities can reduce an organization’s ability to protect itself against configuration faults and credential leaks.

Authenticating via managed identities to an Azure resource relies solely on an API call with a non-secret token. The process is internal to Azure: the secrets used by Azure are not even accessible to end-users.

In typical scenarios without managed identities, the use of credentials can lead to mistakenly leaving them in code bases. In addition, configuration faults may also happen when storing these values or assigning them permissions.

By transparently taking care of Azure Active Directory authentication, Managed Identities eliminate day-to-day credential management.

Ask Yourself Whether

The resource:

  • Needs to authenticate to Azure resources that support Azure Active Directory (AAD).
  • Uses a different Access Control system that doesn’t guarantee the same security controls as AAD, or no Access Control system at all.

There is a risk if you answered yes to all of those questions.

Recommended Secure Coding Practices

Enable the Managed Identities capabilities of this Azure resource. If supported, use a System-Assigned managed identity, as:

  • It cannot be shared across resources.
  • Its life cycle is deeply tied to the life cycle of its Azure resource.
  • It provides a unique independent identity.

Alternatively, User-Assigned Managed Identities can also be used but don’t guarantee the properties listed above.
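
If a User-Assigned identity is nevertheless required, a minimal sketch (identity name and resource group reference are illustrative):

resource "azurerm_user_assigned_identity" "example" {
  name                = "example-identity"
  location            = azurerm_resource_group.example.location # hypothetical group
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_api_management" "example_user_assigned" {
  name           = "example"
  publisher_name = "company"

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.example.id]
  }
}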

Sensitive Code Example

For Typical identity blocks:

resource "azurerm_api_management" "example" { # Sensitive, the identity block is missing
  name           = "example"
  publisher_name = "company"
}

For connections between Kusto Clusters and Azure Data Factory:

resource "azurerm_data_factory_linked_service_kusto" "example" {
  name                 = "example"
  use_managed_identity = false # Sensitive
}

Compliant Solution

For Typical identity blocks:

resource "azurerm_api_management" "example" {
  name           = "example"
  publisher_name = "company"

  identity {
    type = "SystemAssigned"
  }
}

For connections between Kusto Clusters and Azure Data Factory:

resource "azurerm_data_factory_linked_service_kusto" "example" {
  name                 = "example"
  use_managed_identity = true
}

See

terraform:S6379

Enabling Azure resource-specific admin accounts can reduce an organization’s ability to protect itself against account or service account thefts.

Full Administrator permissions fail to correctly separate duties and create potentially critical attack vectors on the impacted resources.

In case of abuse of elevated permissions, both the data on which impacted resources operate and their access traceability are at risk.

Ask Yourself Whether

  • This Azure resource is essential for the information system infrastructure.
  • This Azure resource is essential for mission-critical functions.
  • Compliance policies require this resource to disable its administrative accounts or permissions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Disable the administrative accounts or permissions in this Azure resource.

Sensitive Code Example

For Azure Batch Pools:

resource "azurerm_batch_pool" "example" {
  name = "sensitive"

  start_task {
    user_identity {
      auto_user {
        elevation_level = "Admin" # Sensitive
        scope = "Task"
      }
    }
  }
}

For Azure Container Registries:

resource "azurerm_container_registry" "example" {
  name = "example"
  admin_enabled = true # Sensitive
}

Compliant Solution

For Azure Batch Pools:

resource "azurerm_batch_pool" "example" {
  name = "example"

  start_task {
    user_identity {
      auto_user {
        elevation_level = "NonAdmin"
        scope = "Task"
      }
    }
  }
}

For Azure Container Registries:

resource "azurerm_container_registry" "exemple" {
  name = "example"
  admin_enabled = false
}

See

terraform:S6410

The TLS configuration of Google Cloud load balancers is defined through SSL policies.

Why is this an issue?

There are three managed profiles to choose from: COMPATIBLE (default), MODERN and RESTRICTED:

  • The RESTRICTED profile supports a reduced set of cryptographic algorithms, intended to meet stricter compliance requirements.
  • The MODERN profile supports a wider set of cryptographic algorithms, allowing most modern clients to negotiate TLS.
  • The COMPATIBLE profile supports the widest set of cryptographic algorithms, allowing connections from older client applications.

The MODERN and COMPATIBLE profiles allow the use of older cryptographic algorithms that are no longer considered secure and are susceptible to attack.

What is the potential impact?

An attacker may be able to force the use of the insecure cryptographic algorithms, downgrading the security of the connection. This allows them to compromise the confidentiality or integrity of the data being transmitted.

The MODERN profile allows the use of the insecure SHA-1 signing algorithm. An attacker may be able to generate forged data that passes a signature check, appearing to be legitimate.

The COMPATIBLE profile additionally allows the use of key exchange algorithms that do not support forward secrecy. If the server’s private key is leaked, it can be used to decrypt all network traffic sent to and from that server.

How to fix it

Code examples

Noncompliant code example

resource "google_compute_ssl_policy" "example" {
  name            = "example"
  min_tls_version = "TLS_1_2"
  profile         = "COMPATIBLE" # Noncompliant
}

Compliant solution

resource "google_compute_ssl_policy" "example" {
  name            = "example"
  min_tls_version = "TLS_1_2"
  profile         = "RESTRICTED"
}

How does this work?

If an attacker is able to intercept and modify network traffic, they can filter the list of algorithms sent between the client and the server. By removing all secure algorithms from the list, the attacker can force the use of any insecure algorithms that remain.

The RESTRICTED profile only allows strong cryptographic algorithms to be used. There are no insecure algorithms that can compromise the security of the connection.

Pitfalls

Older client applications may not support the algorithms required by the RESTRICTED profile. These applications will no longer be able to connect.

If the MODERN or COMPATIBLE profiles must be used so that older clients can connect, consider using additional measures such as TLS client certificates or IP allow-lists to improve security.

Resources

Standards

External coding guidelines

terraform:S6412

When object versioning for Google Cloud Storage (GCS) buckets is enabled, different versions of an object are stored in the bucket, preventing accidental deletion. A specific version can always be deleted when the generation number of an object version is specified in the request.

Object versioning cannot be enabled on a bucket with a retention policy. A retention policy ensures that an object is retained for a specific period of time even if a request is made to delete or replace it. Thus, a retention policy locks the single current version of an object in the bucket, which differs from object versioning where different versions of an object are retained.
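
For reference, a retention policy is configured as follows (a sketch with an illustrative 30-day period, expressed in seconds):

resource "google_storage_bucket" "example_retention" {
  name     = "example"
  location = "US"

  retention_policy {
    # 30 days in seconds; cannot be combined with object versioning.
    retention_period = 2592000
  }
}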

Ask Yourself Whether

  • The bucket stores information that require high availability.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable GCS bucket versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

Versioning is disabled by default:

resource "google_storage_bucket" "example" { # Sensitive
  name          = "example"
  location      = "US"
}

Compliant Solution

Versioning is enabled:

resource "google_storage_bucket" "example" {
  name          = "example"
  location      = "US"

  versioning {
    enabled = "true"
  }
}

See

terraform:S6413

Defining a short log retention duration can reduce an organization’s ability to backtrace the actions of malicious actors in case of a security incident.

Logging allows operational and security teams to get detailed and real-time feedback on an information system’s events. The logging coverage enables them to quickly react to events, ranging from the most benign bugs to the most impactful security incidents, such as intrusions.

Apart from security detection, logging capabilities also directly influence future digital forensic analyses. For example, detailed logging will allow investigators to establish a timeline of the actions perpetrated by an attacker.

Ask Yourself Whether

  • This component is essential for the information system infrastructure.
  • This component is essential for mission-critical functions.
  • Compliance policies require traceability for a longer duration.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Increase the log retention period to a duration sufficient to investigate and restore service in case of an incident.

Sensitive Code Example

For AWS Cloudwatch Logs:

resource "aws_cloudwatch_log_group" "example" {
  name = "example"
  retention_in_days = 3 # Sensitive
}

For Azure Firewall Policy:

resource "azurerm_firewall_policy" "example" {
  insights {
    enabled = true
    retention_in_days = 7 # Sensitive
  }
}

For Google Cloud Logging buckets:

resource "google_logging_project_bucket_config" "example" {
    project = var.project
    location = "global"
    retention_days = 7 # Sensitive
    bucket_id = "_Default"
}

Compliant Solution

For AWS Cloudwatch Logs:

resource "aws_cloudwatch_log_group" "example" {
  name = "example"
  retention_in_days = 30
}

For Azure Firewall Policy:

resource "azurerm_firewall_policy" "example" {
  insights {
    enabled = true
    retention_in_days = 30
  }
}

For Google Cloud Logging buckets:

resource "google_logging_project_bucket_config" "example" {
    project = var.project
    location = "global"
    retention_days = 30
    bucket_id = "_Default"
}
terraform:S6255

When S3 bucket versioning is enabled, it’s possible to require an additional authentication factor before versions of an object can be deleted or the versioning state of a bucket can be changed. This prevents accidental object deletion by forcing the user sending the delete request to prove that they have a valid MFA device and a corresponding valid token.

Ask Yourself Whether

  • The S3 bucket stores sensitive information that must be preserved in the long term.
  • The S3 bucket grants delete permission to many users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 MFA delete. Note that:

  • MFA delete can only be enabled with the AWS CLI or API and with the root account.
  • To delete an object version, the API should be used with the x-amz-mfa header.
  • The API request, with the x-amz-mfa header, can only be made over HTTPS.

Sensitive Code Example

A versioned S3 bucket does not have MFA delete enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" { # Sensitive
  bucket = "example"

  versioning {
    enabled = true
  }
}

A versioned S3 bucket does not have MFA delete enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example" { # Sensitive
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
  }
}

Compliant Solution

MFA delete is enabled for AWS provider version 3 or below:

resource "aws_s3_bucket" "example" {
  bucket = "example"

  versioning {
    enabled = true
    mfa_delete = true
  }
}

MFA delete is enabled for AWS provider version 4 or above:

resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id
  versioning_configuration {
    status = "Enabled"
    mfa_delete = "Enabled"
  }
  mfa = var.MFA
}

See

terraform:S6332

Amazon Elastic File System (EFS) is a serverless file system that does not require provisioning or managing storage. Stored files can be automatically encrypted by the service, so that adversaries who gain physical access to the storage medium are not able to access the data.

Ask Yourself Whether

  • The file system contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EFS file systems that contain sensitive information. Encryption and decryption are handled transparently by EFS, so no further modifications to the application are necessary.

Sensitive Code Example

For aws_efs_file_system:

resource "aws_efs_file_system" "fs" {  # Sensitive, encryption disabled by default
}

Compliant Solution

For aws_efs_file_system:

resource "aws_efs_file_system" "fs" {
  encrypted = true
}

See

terraform:S6375

Azure Active Directory offers built-in roles that can be assigned to users, groups, or service principals. Some of these roles should be carefully assigned as they grant sensitive permissions like the ability to reset passwords for all users.

An Azure account that fails to limit the use of such roles has a higher risk of being breached by a compromised owner.

This rule raises an issue when one of the following roles is assigned:

  • Application Administrator
  • Authentication Administrator
  • Cloud Application Administrator
  • Global Administrator
  • Groups Administrator
  • Helpdesk Administrator
  • Password Administrator
  • Privileged Authentication Administrator
  • Privileged Role Administrator
  • User Administrator

Ask Yourself Whether

  • The user, group, or service principal doesn’t use the entirety of this extensive set of permissions to operate on a day-to-day basis.
  • It is possible to follow the Separation of Duties principle and split permissions between multiple users, but it’s not enforced.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

  • Limit the assignment of Global Administrator roles to fewer than five people or service principals.
  • Apply the least privilege principle by choosing a role with a limited set of permissions.
  • If no built-in role meets your needs, create a custom role with as few permissions as possible.

Sensitive Code Example

resource "azuread_directory_role" "example" {
  display_name = "Privileged Role Administrator" # Sensitive
}

resource "azuread_directory_role_member" "example" {
  role_object_id   = azuread_directory_role.example.object_id
  member_object_id = data.azuread_user.example.object_id
}

Compliant Solution

resource "azuread_directory_role" "example" {
  display_name = "Usage Summary Reports Reader"
}

resource "azuread_directory_role_member" "example" {
  role_object_id   = azuread_directory_role.example.object_id
  member_object_id = data.azuread_user.example.object_id
}

See

php:S2115

When accessing a database, an empty password should be avoided as it introduces a weakness.

Why is this an issue?

When a database does not require a password for authentication, it allows anyone to access and manipulate the data stored within it. Exploiting this vulnerability typically involves identifying the target database and establishing a connection to it without the need for any authentication credentials.

What is the potential impact?

Once connected, an attacker can perform various malicious actions, such as viewing, modifying, or deleting sensitive information, potentially leading to data breaches or unauthorized access to critical systems. It is crucial to address this vulnerability promptly to ensure the security and integrity of the database and the data it contains.

Unauthorized Access to Sensitive Data

When a database lacks a password for authentication, it opens the door for unauthorized individuals to gain access to sensitive data. This can include personally identifiable information (PII), financial records, intellectual property, or any other confidential information stored in the database. Without proper access controls in place, malicious actors can exploit this vulnerability to retrieve sensitive data, potentially leading to identity theft, financial loss, or reputational damage.

Compromise of System Integrity

Without a password requirement, unauthorized individuals can gain unrestricted access to a database, potentially compromising the integrity of the entire system. Attackers can inject malicious code, alter configurations, or manipulate data within the database, leading to system malfunctions, unauthorized system access, or even complete system compromise. This can disrupt business operations, cause financial losses, and expose the organization to further security risks.

Unwanted Modifications or Deletions

The absence of a password for database access allows anyone to make modifications or deletions to the data stored within it. This poses a significant risk, as unauthorized changes can lead to data corruption, loss of critical information, or the introduction of malicious content. For example, an attacker could modify financial records, tamper with customer orders, or delete important files, causing severe disruptions to business processes and potentially leading to financial and legal consequences.

Overall, the lack of a password configured to access a database poses a serious security risk, enabling unauthorized access, data breaches, system compromise, and unwanted modifications or deletions. It is essential to address this vulnerability promptly to safeguard sensitive data, maintain system integrity, and protect the organization from potential harm.

How to fix it in Core PHP

Code examples

The following code uses an empty password to connect to a MySQL database.

The vulnerability can be fixed by using a strong password retrieved from an environment variable MYSQL_SECURE_PASSWORD. This environment variable is set during deployment. It should be strong and different for each database.

Noncompliant code example

$conn = new mysqli($servername, $username, ""); // Noncompliant

Compliant solution

$password = getenv('MYSQL_SECURE_PASSWORD');
$conn = new mysqli($servername, $username, $password);

Pitfalls

Hard-coded passwords

It could be tempting to replace the empty password with a hard-coded one. Hard-coding passwords in the code can pose significant security risks. Here are a few reasons why it is not recommended:

  1. Security Vulnerability: Hard-coded passwords can be easily discovered by anyone who has access to the code, such as other developers or attackers. This can lead to unauthorized access to the database and potential data breaches.
  2. Lack of Flexibility: Hard-coded passwords make it difficult to change the password without modifying the code. If the password needs to be updated, it would require recompiling and redeploying the code, which can be time-consuming and error-prone.
  3. Version Control Issues: Storing passwords in code can lead to version control issues. If the code is shared or stored in a version control system, the password will be visible to anyone with access to the repository, which is a security risk.

To mitigate these risks, it is recommended to use secure methods for storing and retrieving passwords, such as using environment variables, configuration files, or secure key management systems. These methods allow for better security, flexibility, and separation of sensitive information from the codebase.
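
For instance, the configuration-file approach can be implemented by keeping the credentials in an ini file stored outside the web root. The sketch below is illustrative; the file path and key names are hypothetical:

// Hypothetical config file stored outside the web root, e.g. /etc/myapp/db.ini:
//   [database]
//   username = app_user
//   password = a-strong-unique-password

$config = parse_ini_file('/etc/myapp/db.ini', true);
if ($config === false || empty($config['database']['password'])) {
    throw new RuntimeException('Database credentials are missing');
}

$conn = new mysqli($servername, $config['database']['username'], $config['database']['password']);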

Resources

Standards

php:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they didn’t intend, such as updating their profile or sending a message, or, more generally, anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. Because web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified through HTTP POST or HTTP DELETE requests, for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • to be implemented, for example, with an unguessable CSRF token (see the sketch after this list).
  • Sensitive operations should not be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.
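
A minimal sketch of such a token in plain PHP, using the synchronizer token pattern (the function names are illustrative, not a framework API):

session_start();

function csrfToken(): string {
    if (empty($_SESSION['csrf_token'])) {
        // Unguessable token, generated by a CSPRNG
        $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    }
    return $_SESSION['csrf_token'];
}

function assertValidCsrfToken(): void {
    $submitted = $_POST['csrf_token'] ?? '';
    // hash_equals() performs a constant-time comparison
    if (!hash_equals($_SESSION['csrf_token'] ?? '', $submitted)) {
        http_response_code(403);
        exit('Invalid CSRF token');
    }
}

The token is embedded in a hidden form field when rendering the form, and assertValidCsrfToken() is called before handling any unsafe (state-changing) request.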

Sensitive Code Example

For Laravel VerifyCsrfToken middleware

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = [
        'api/*'
    ]; // Sensitive; disable CSRF protection for a list of routes
}

For Symfony Forms

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class Controller extends AbstractController {

  public function action() {
    $this->createForm('', null, [
      'csrf_protection' => false, // Sensitive; disable CSRF protection for a single form
    ]);
  }
}

Compliant Solution

For Laravel VerifyCsrfToken middleware

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as Middleware;

class VerifyCsrfToken extends Middleware
{
    protected $except = []; // Compliant
}

Remember to add the @csrf Blade directive to the relevant forms when removing an element from $except. Otherwise the form submission will stop working.

For Symfony Forms

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;

class Controller extends AbstractController {

  public function action() {
    $this->createForm('', null, []); // Compliant; CSRF protection is enabled by default
  }
}

See

php:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

CakePHP 1.x, 2.x:

Configure::write('debug', 1); // Sensitive: development mode
or
Configure::write('debug', 2); // Sensitive: development mode
or
Configure::write('debug', 3); // Sensitive: development mode

CakePHP 3.0:

use Cake\Core\Configure;

Configure::config('debug', true); // Sensitive: development mode

WordPress:

define( 'WP_DEBUG', true ); // Sensitive: development mode

Compliant Solution

CakePHP 1.x, 2.x:

Configure::write('debug', 0); // Compliant; this is the production mode

CakePHP 3.0:

use Cake\Core\Configure;

Configure::config('debug', false); // Compliant: "0" or "false" for CakePHP 3.x enables production mode and avoids leaking sensitive data in the logs.

WordPress:

define( 'WP_DEBUG', false ); // Compliant

See

php:S4508

This rule is deprecated, and will eventually be removed.

Deserializing objects is security-sensitive and has led to vulnerabilities in the past.

Object deserialization from an untrusted source can lead to unexpected code execution. Deserialization takes a stream of bits and turns it into an object. If the stream contains the type of object you expect, all is well. But if you’re deserializing data coming from untrusted input, and an attacker has inserted some other type of object, you’re in trouble. Why? A known attack scenario involves the creation of a serialized PHP object with crafted attributes which will modify your application’s behavior. This attack relies on PHP magic methods like __destruct, __wakeup or __toString. The attacker doesn’t necessarily need the source code of the targeted application to exploit the vulnerability; they can also rely on the presence of open-source components and use tools to craft malicious payloads.

Ask Yourself Whether

  • an attacker could have tampered with the source provided to the deserialization function
  • you are using an unsafe deserialization function

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To prevent insecure deserialization, it is recommended to:

  • Use safe libraries that do not allow code execution at deserialization (see the sketch after this list).
  • Not communicate with the outside world using serialized objects
  • Limit access to the serialized source
    • if it is a file, restrict the access to it.
    • if it comes from the network, restrict who has access to the process, such as with a firewall or by authenticating the sender first.
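
As a sketch of the first point, on PHP 7 and later, object instantiation can be forbidden entirely when a plain data format is not an option:

// Prefer a data-only format that cannot instantiate arbitrary objects:
$data = json_decode($untrustedInput, true);

// If unserialize() cannot be avoided, forbid object instantiation so that
// magic methods such as __destruct() or __wakeup() cannot be triggered:
$data = unserialize($untrustedInput, ['allowed_classes' => false]);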

See

php:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress highly repetitive data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general the data compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold. In particular, it’s not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For ZipArchive module:

$zip = new ZipArchive();
if ($zip->open($file) === true) {
    $zip->extractTo('.'); // Sensitive
    $zip->close();
}

For Zip module:

$zip = zip_open($file);
while ($file = zip_read($zip)) {
    $filename = zip_entry_name($file);
    $size = zip_entry_filesize($file);

    if (substr($filename, -1) !== '/') {
        $content = zip_entry_read($file, zip_entry_filesize($file)); // Sensitive - zip_entry_read() uses zip_entry_filesize()
        file_put_contents($filename, $content);
    } else {
        mkdir($filename);
    }
}
zip_close($zip);

Compliant Solution

For ZipArchive module:

define('MAX_FILES', 10000);
define('MAX_SIZE', 1000000000); // 1 GB
define('MAX_RATIO', 10);
define('READ_LENGTH', 1024);

$fileCount = 0;
$totalSize = 0;

$zip = new ZipArchive();
if ($zip->open($file) === true) {
    for ($i = 0; $i < $zip->numFiles; $i++) {
        $filename = $zip->getNameIndex($i);
        $stats = $zip->statIndex($i);

        if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') {
            throw new Exception();
        }

        if (substr($filename, -1) !== '/') {
            $fileCount++;
            if ($fileCount > MAX_FILES) {
                // Reached max. number of files
                throw new Exception();
            }

            $fp = $zip->getStream($filename); // Compliant
            $currentSize = 0;
            while (!feof($fp)) {
                $currentSize += READ_LENGTH;
                $totalSize += READ_LENGTH;

                if ($totalSize > MAX_SIZE) {
                    // Reached max. size
                    throw new Exception();
                }

                // Additional protection: check compression ratio
                if ($stats['comp_size'] > 0) {
                    $ratio = $currentSize / $stats['comp_size'];
                    if ($ratio > MAX_RATIO) {
                        // Reached max. compression ratio
                        throw new Exception();
                    }
                }

                file_put_contents($filename, fread($fp, READ_LENGTH), FILE_APPEND);
            }

            fclose($fp);
        } else {
            mkdir($filename);
        }
    }
    $zip->close();
}

For Zip module:

define('MAX_FILES', 10000);
define('MAX_SIZE', 1000000000); // 1 GB
define('MAX_RATIO', 10);
define('READ_LENGTH', 1024);

$fileCount = 0;
$totalSize = 0;

$zip = zip_open($file);
while ($file = zip_read($zip)) {
    $filename = zip_entry_name($file);

    if (strpos($filename, '../') !== false || substr($filename, 0, 1) === '/') {
        throw new Exception();
    }

    if (substr($filename, -1) !== '/') {
        $fileCount++;
        if ($fileCount > MAX_FILES) {
            // Reached max. number of files
            throw new Exception();
        }

        $currentSize = 0;
        while ($data = zip_entry_read($file, READ_LENGTH)) { // Compliant
            $currentSize += READ_LENGTH;
            $totalSize += READ_LENGTH;

            if ($totalSize > MAX_SIZE) {
                // Reached max. size
                throw new Exception();
            }

            // Additional protection: check compression ratio
            if (zip_entry_compressedsize($file) > 0) {
                $ratio = $currentSize / zip_entry_compressedsize($file);
                if ($ratio > MAX_RATIO) {
                    // Reached max. compression ratio
                    throw new Exception();
                }
            }

            file_put_contents($filename, $data, FILE_APPEND);
        }
    } else {
        mkdir($filename);
    }
}
zip_close($zip);

See

php:S2277

This rule is deprecated; use S5542 instead.

Why is this an issue?

Without OAEP in RSA encryption, it takes less work for an attacker to decrypt the data or infer patterns from the ciphertext. This rule raises an issue when openssl_public_encrypt is used with one of the following padding constants: OPENSSL_NO_PADDING, OPENSSL_PKCS1_PADDING, or OPENSSL_SSLV23_PADDING.

Noncompliant code example

function encrypt($data, $key) {
  $crypted='';
  openssl_public_encrypt($data, $crypted, $key, OPENSSL_NO_PADDING); // Noncompliant
  return $crypted;
}

Compliant solution

function encrypt($data, $key) {
  $crypted='';
  openssl_public_encrypt($data, $crypted, $key, OPENSSL_PKCS1_OAEP_PADDING);
  return $crypted;
}

Resources

php:S2278

This rule is deprecated; use S5547 instead.

Why is this an issue?

According to the US National Institute of Standards and Technology (NIST), the Data Encryption Standard (DES) is no longer considered secure:

Adopted in 1977 for federal agencies to use in protecting sensitive, unclassified information, the DES is being withdrawn because it no longer provides the security that is needed to protect federal government information.

Federal agencies are encouraged to use the Advanced Encryption Standard, a faster and stronger algorithm approved as FIPS 197 in 2001.

For similar reasons, RC2 should also be avoided.

Noncompliant code example

<?php
  $ciphertext = mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_DES_COMPAT, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_TRIPLEDES, $key, $plaintext, $mode); // Noncompliant
  // ...
  $ciphertext = mcrypt_encrypt(MCRYPT_3DES, $key, $plaintext, $mode); // Noncompliant

  $cipher = "des-ede3-cfb";  // Noncompliant
  $ciphertext_raw = openssl_encrypt($plaintext, $cipher, $key, $options=OPENSSL_RAW_DATA, $iv);
?>

Compliant solution

<?php
  $ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, MCRYPT_MODE_CBC, $iv);
?>
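
Note that the Mcrypt extension itself is deprecated and was removed from PHP core in version 7.2. On current PHP versions, an authenticated cipher from the bundled Sodium extension is a safer choice; a minimal sketch:

// XChaCha20-Poly1305 authenticated encryption via the Sodium extension
$key = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
$ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt($plaintext, '', $nonce, $key);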

Resources

php:S3336

PHP session tokens are normally transmitted through HTTP cookies. However, for clients that do not support cookies and when the PHP session.use_trans_sid setting is enabled, those tokens can be transmitted as URL parameters.

Why is this an issue?

A GET URL parameter can be disclosed in a variety of ways:

  • Directly in a web browser address bar.
  • In navigation history.
  • In web server or intermediate proxy log files.

What is the potential impact?

Attackers with access to any of those disclosure locations will be able to see and steal a victim’s session token. They can then use it to log in as the user, impersonate their account, and take advantage of their privileges.

Such an attack can be more or less severe depending on the victim’s privileges. Common security impacts range from data theft to application takeover.

Data theft

Attackers with access to a compromised account will be able to disclose any information stored on it. This includes the Personally Identifiable Information (PII) of the user.

The confidentiality of PII is a requirement from national security regulatory authorities in most countries. Insufficiently protecting this data could have legal consequences and lead to fines or other prosecutions.

Application takeover

Attackers who compromise the account of a high-privileged user could modify internal web application logic, disrupt workflows, or change other application settings in a way that gives them full control over it.

Such an attack would lead to reputational damages and financial and legal consequences.

How to fix it

Code examples

Noncompliant code example

; php.ini
session.use_trans_sid=1  ; Noncompliant

Compliant solution

; php.ini
session.use_trans_sid=0

How does this work?

The compliant code example disables the session.use_trans_sid setting.

Note that this parameter is off by default.
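
If the main php.ini file cannot be modified, the same setting can also be enforced at runtime, provided the directive is changeable at runtime in your PHP version; a minimal sketch:

// Must run before session_start()
ini_set('session.use_trans_sid', '0');
session_start();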

Resources

Standards

php:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest algorithms are either using it without padding or using the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, "ecb"); // Noncompliant

Compliant solution

Mcrypt is deprecated and should not be used. You can use Sodium instead.

For the AES symmetric cipher, use the GCM mode:

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.
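
For illustration, a complete AES-256-GCM round with the Sodium extension might look like the sketch below. The availability check matters because this particular call requires hardware AES support:

if (sodium_crypto_aead_aes256gcm_is_available()) {
    $key = sodium_crypto_aead_aes256gcm_keygen();
    // A fresh, unique nonce is required for every encryption with the same key
    $nonce = random_bytes(SODIUM_CRYPTO_AEAD_AES256GCM_NPUBBYTES);
    $ciphertext = sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);
    $decrypted  = sodium_crypto_aead_aes256gcm_decrypt($ciphertext, '', $nonce, $key);
}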

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.

Resources

Articles & blog posts

Standards

php:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Mcrypt

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

mcrypt_encrypt(MCRYPT_DES, $key, $plaintext, $mode); // Noncompliant

Compliant solution

Mcrypt is deprecated and should not be used. You can use Sodium instead.

sodium_crypto_aead_aes256gcm_encrypt($plaintext, '', $nonce, $key);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.
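
The AES-GCM call shown above requires CPU support for AES; a portable alternative from the same Sodium extension is the secretbox construction (XSalsa20-Poly1305). A minimal sketch, assuming the key is stored securely:

$key = sodium_crypto_secretbox_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
$ciphertext = sodium_crypto_secretbox($plaintext, $nonce, $key);
// The nonce is not secret and can be stored alongside the ciphertext
$plain = sodium_crypto_secretbox_open($ciphertext, $nonce, $key);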

Resources

Standards

php:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive and has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the rand() and mt_rand() functions rely on a pseudorandom number generator, they should not be used for security-critical applications or for protecting sensitive data.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. This is the case for all encryption mechanisms, or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use functions which rely on a cryptographically strong random number generator, such as random_int(), random_bytes() or openssl_random_pseudo_bytes().
  • When using openssl_random_pseudo_bytes(), provide and check the crypto_strong parameter (see the sketch after this list).
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.
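
A minimal sketch of the crypto_strong check mentioned in the list above:

$strong = false;
$bytes = openssl_random_pseudo_bytes(16, $strong); // 16 cryptographically strong random bytes
if ($bytes === false || !$strong) {
    // The platform could not provide a cryptographically strong result
    throw new RuntimeException('CSPRNG unavailable');
}
$token = bin2hex($bytes);

Note that on PHP 8.0 and later the function is documented to throw an exception on failure rather than return false, so the explicit false check mainly matters on older versions.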

Sensitive Code Example

$random = rand();
$random2 = mt_rand(0, 99);

Compliant Solution

$randomInt = random_int(0,99); // Compliant; generates a cryptographically secure random integer

See

php:S3334

File access functions in PHP are typically used to open local files. They are also capable of reading files from remote servers using protocols such as HTTP, HTTPS and FTP.

This behavior is controlled by the allow_url_fopen and allow_url_include settings.

Why is this an issue?

Most applications do not require or expect the file access functions to download remotely accessible files. However, attackers can abuse these remote file access features while exploiting other vulnerabilities, such as path traversal issues.

What is the potential impact?

While activating these settings does not pose a direct threat to the application’s security, they can make the exploitation of other vulnerabilities easier and more severe.

If an attacker can control a file location while allow_url_fopen is set to 1, they can use this ability to perform a Server-Side Request Forgery exploit. This allows the attacker to affect more than just the local application and they may be able to laterally attack other assets on the local network.

If allow_url_include is set to 1, the attacker will also have the ability to download and execute arbitrary PHP code.

How to fix it

allow_url_fopen and allow_url_include should be deactivated in the main PHP configuration file. Note that allow_url_include is disabled by default while allow_url_fopen is not and must be explicitly disabled.

Code examples

Noncompliant code example

; php.ini  ; Noncompliant: allow_url_fopen is not disabled here and is enabled by default
allow_url_include=1  ; Noncompliant

Compliant solution

; php.ini
allow_url_fopen=0
allow_url_include=0

Resources

Standards

php:S3335

The cgi.force_redirect php.ini configuration controls the behavior of the PHP engine when used in CGI mode. In particular, it prevents CGI scripts from being directly requested without prior web server or application processing.

When disabled, CGI scripts can be requested directly.

Why is this an issue?

Pre-processing on the server side is often required to check user authentication when working in CGI mode. These preliminary actions can also set various configuration parameters necessary for the CGI script to work correctly.

What is the potential impact?

CGI scripts might behave unexpectedly if the proper configuration is not set up before they are accessed.

Most serious security-related consequences will affect the authorization and authentication mechanisms of the application. When the web server is responsible for authenticating clients and forwarding the proper identity to the script, direct access will bypass this authentication step.

Attackers could also provide arbitrary identities to the CGI script by forging specific HTTP headers or parameters. They could then impersonate any legitimate user of the application.

How to fix it

cgi.force_redirect should be enforced in the main PHP configuration file.

Note that this parameter is enabled by default.

Code examples

Noncompliant code example

; php.ini
cgi.force_redirect=0  ; Noncompliant

Compliant solution

; php.ini
cgi.force_redirect=1

Pitfalls

The cgi.force_redirect directive is not supported by all web servers. For example, the Microsoft IIS web server is unable to differentiate an internally redirected request from a normal one.

When using such a server, the cgi.force_redirect parameter will have to be disabled for the CGI scripts to work properly. In that case, it is important to mitigate the resulting security threat by other means.

Resources

Standards

php:S3337

The enable_dl PHP configuration setting allows PHP extensions to be loaded dynamically at runtime.

Why is this an issue?

When dynamic loading is enabled, PHP code can load arbitrary PHP extensions by calling the dl function. This can be used to bypass restrictions set with the open_basedir configuration.

PHP defaults to allowing dynamic loading.

How to fix it

The enable_dl setting should be set to 0 in the main PHP configuration.

Code examples

Noncompliant code example

; php.ini
enable_dl=1  ; Noncompliant

Compliant solution

; php.ini
enable_dl=0

Resources

Standards

php:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples

Noncompliant code example

$opts = array(
  'ssl' => [
    'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_1_CLIENT // Noncompliant
  ],
  'http'=>array(
    'method'=>"GET"
  )
);

$context = stream_context_create($opts);

$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

Compliant solution

$opts = array(
  'ssl' => [
    'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT
  ],
  'http'=>array(
    'method'=>"GET"
  )
);

$context = stream_context_create($opts);

$fp = fopen('https://www.example.com', 'r', false, $context);
fpassthru($fp);
fclose($fp);

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS v1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older and insecure cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.
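
As an illustration outside of stream contexts, the same minimum version can be enforced with the cURL extension; a minimal sketch:

$ch = curl_init('https://www.example.com');
// Refuse to negotiate anything older than TLS 1.2
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
$response = curl_exec($ch);
curl_close($ch);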

Resources

Articles & blog posts

Standards

php:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it in a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Core PHP

Code examples

Noncompliant code example

Here is an example of a private key generation with RSA:

$config = [
    "digest_alg"       => "sha512",
    "private_key_bits" => 1024,                 // Noncompliant
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];

$res = openssl_pkey_new($config);

Compliant solution

$config = [
    "digest_alg"       => "sha512",
    "private_key_bits" => 2048,
    "private_key_type" => OPENSSL_KEYTYPE_RSA,
];

$res = openssl_pkey_new($config);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 bits should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1
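
For illustration, a key pair on one of these curves can be generated with the OpenSSL extension; a minimal sketch (prime256v1 is OpenSSL's name for secp256r1):

$res = openssl_pkey_new([
    "private_key_type" => OPENSSL_KEYTYPE_EC,
    "curve_name"       => "prime256v1", // secp256r1, NIST P-256
]);
openssl_pkey_export($res, $privateKeyPem);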

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer. It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

php:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive and has led to vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However these are not the only means to defeat or weaken an encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that encryption algorithms strength decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to cryptanalysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated with one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations which are not secure:
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.

Sensitive Code Example

Builtin functions

function myEncrypt($cipher, $key, $data, $mode, $iv, $options, $padding, $infile, $outfile, $recipcerts, $headers, $nonce, $ad, $pub_key_ids, $env_keys)
{
    mcrypt_ecb ($cipher, $key, $data, $mode); // Sensitive
    mcrypt_cfb($cipher, $key, $data, $mode, $iv); // Sensitive
    mcrypt_cbc($cipher, $key, $data, $mode, $iv); // Sensitive
    mcrypt_encrypt($cipher, $key, $data, $mode); // Sensitive

    openssl_encrypt($data, $cipher, $key, $options, $iv); // Sensitive
    openssl_public_encrypt($data, $crypted, $key, $padding); // Sensitive
    openssl_pkcs7_encrypt($infile, $outfile, $recipcerts, $headers); // Sensitive
    openssl_seal($data, $sealed_data, $env_keys, $pub_key_ids); // Sensitive

    sodium_crypto_aead_aes256gcm_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_chacha20poly1305_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_chacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_aead_xchacha20poly1305_ietf_encrypt ($data, $ad, $nonce, $key); // Sensitive
    sodium_crypto_box_seal ($data, $key); // Sensitive
    sodium_crypto_box ($data, $nonce, $key); // Sensitive
    sodium_crypto_secretbox ($data, $nonce, $key); // Sensitive
    sodium_crypto_stream_xor ($data, $nonce, $key); // Sensitive
}

CakePHP

use Cake\Utility\Security;

function myCakeEncrypt($key, $data, $engine)
{
    Security::encrypt($data, $key); // Sensitive

    // Do not use custom made engines and remember that Mcrypt is deprecated.
    Security::engine($engine); // Sensitive. Setting the encryption engine.
}

CodeIgniter

class EncryptionController extends CI_Controller
{
    public function __construct()
    {
        parent::__construct();
        $this->load->library('encryption');
    }

    public function index()
    {
        $this->encryption->create_key(16); // Sensitive. Review the key length.
        $this->encryption->initialize( // Sensitive.
            array(
                'cipher' => 'aes-256',
                'mode' => 'ctr',
                'key' => 'the key',
            )
        );
        $this->encryption->encrypt("mysecretdata"); // Sensitive.
    }
}

CraftCMS version 3

use Craft;

// This is similar to Yii as it is used by CraftCMS
function craftEncrypt($data, $key, $password) {
    Craft::$app->security->encryptByKey($data, $key); // Sensitive
    Craft::$app->getSecurity()->encryptByKey($data, $key); // Sensitive
    Craft::$app->security->encryptByPassword($data, $password); // Sensitive
    Craft::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive
}

Drupal 7 - Encrypt module

function drupalEncrypt() {
    $encrypted_text = encrypt('some string to encrypt'); // Sensitive
}

Joomla

use Joomla\Crypt\CipherInterface;

abstract class MyCipher implements CipherInterface // Sensitive. Implementing custom cipher class
{}

function joomlaEncrypt() {
    new Joomla\Crypt\Cipher_Sodium(); // Sensitive
    new Joomla\Crypt\Cipher_Simple(); // Sensitive
    new Joomla\Crypt\Cipher_Rijndael256(); // Sensitive
    new Joomla\Crypt\Cipher_Crypto(); // Sensitive
    new Joomla\Crypt\Cipher_Blowfish(); // Sensitive
    new Joomla\Crypt\Cipher_3DES(); // Sensitive
}

Laravel

use Illuminate\Support\Facades\Crypt;

function myLaravelEncrypt($data)
{
    Crypt::encryptString($data); // Sensitive
    Crypt::encrypt($data); // Sensitive
    // encrypt using the Laravel "encrypt" helper
    encrypt($data); // Sensitive
}

PHP-Encryption library

use Defuse\Crypto\Crypto;
use Defuse\Crypto\File;

function myPhpEncryption($data, $key, $password, $inputFilename, $outputFilename, $inputHandle, $outputHandle) {
    Crypto::encrypt($data, $key); // Sensitive
    Crypto::encryptWithPassword($data, $password); // Sensitive
    File::encryptFile($inputFilename, $outputFilename, $key); // Sensitive
    File::encryptFileWithPassword($inputFilename, $outputFilename, $password); // Sensitive
    File::encryptResource($inputHandle, $outputHandle, $key); // Sensitive
    File::encryptResourceWithPassword($inputHandle, $outputHandle, $password); // Sensitive
}

PhpSecLib

function myphpseclib($mode) {
    new phpseclib\Crypt\RSA(); // Sensitive. Note: RSA can also be used for signing data.
    new phpseclib\Crypt\AES(); // Sensitive
    new phpseclib\Crypt\Rijndael(); // Sensitive
    new phpseclib\Crypt\Twofish(); // Sensitive
    new phpseclib\Crypt\Blowfish(); // Sensitive
    new phpseclib\Crypt\RC4(); // Sensitive
    new phpseclib\Crypt\RC2(); // Sensitive
    new phpseclib\Crypt\TripleDES(); // Sensitive
    new phpseclib\Crypt\DES(); // Sensitive

    new phpseclib\Crypt\AES($mode); // Sensitive
    new phpseclib\Crypt\Rijndael($mode); // Sensitive
    new phpseclib\Crypt\TripleDES($mode); // Sensitive
    new phpseclib\Crypt\DES($mode); // Sensitive
}

Sodium Compat library

function mySodiumCompatEncrypt($data, $ad, $nonce, $key) {
    ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_aead_xchacha20poly1305_ietf_encrypt($data, $ad, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_aead_chacha20poly1305_encrypt($data, $ad, $nonce, $key); // Sensitive

    ParagonIE_Sodium_Compat::crypto_aead_aes256gcm_encrypt($data, $ad, $nonce, $key); // Sensitive

    ParagonIE_Sodium_Compat::crypto_box($data, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_secretbox($data, $nonce, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_box_seal($data, $key); // Sensitive
    ParagonIE_Sodium_Compat::crypto_secretbox_xchacha20poly1305($data, $nonce, $key); // Sensitive
}

Yii version 2

use Yii;

// Similar to CraftCMS as it uses Yii
function YiiEncrypt($data, $key, $password) {
    Yii::$app->security->encryptByKey($data, $key); // Sensitive
    Yii::$app->getSecurity()->encryptByKey($data, $key); // Sensitive
    Yii::$app->security->encryptByPassword($data, $password); // Sensitive
    Yii::$app->getSecurity()->encryptByPassword($data, $password); // Sensitive
}

Zend

use Zend\Crypt\FileCipher;
use Zend\Crypt\PublicKey\DiffieHellman;
use Zend\Crypt\PublicKey\Rsa;
use Zend\Crypt\Hybrid;
use Zend\Crypt\BlockCipher;

function myZendEncrypt($key, $data, $prime, $options, $generator, $lib)
{
    new FileCipher; // Sensitive. This is used to encrypt files

    new DiffieHellman($prime, $generator, $key); // Sensitive

    $rsa = Rsa::factory([ // Sensitive
        'public_key'    => 'public_key.pub',
        'private_key'   => 'private_key.pem',
        'pass_phrase'   => 'mypassphrase',
        'binary_output' => false,
    ]);
    $rsa->encrypt($data); // No issue raised here. The configuration of the Rsa object is the line to review.

    $hybrid = new Hybrid(); // Sensitive

    BlockCipher::factory($lib, $options); // Sensitive
}

See

php:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Symfony

Code examples

In the context of Symfony Security, session fixation protection can be disabled by setting the session_fixation_strategy attribute to none.

Session fixation protection is enabled by default in Symfony. It can be explicitly enabled by setting the session_fixation_strategy attribute to migrate or invalidate.

Noncompliant code example

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'none', // Noncompliant
    ]);
};

Compliant solution

namespace Symfony\Component\DependencyInjection\Loader\Configurator;

return static function (ContainerConfigurator $container) {
    $container->extension('security', [
        'session_fixation_strategy' => 'migrate',
    ]);
};

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.
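
Outside of the Symfony configuration shown above, the same protection can be applied manually in plain PHP with session_regenerate_id(). A minimal sketch of a login flow (the surrounding authentication logic is assumed):

session_start();
// ... after the user's credentials have been successfully verified:
session_regenerate_id(true); // issue a fresh session ID and invalidate the old one
$_SESSION['authenticated'] = true;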

Resources

Documentation

Security Configuration Reference - Session Fixation Strategy

Standards

php:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side script. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session-cookie
  • the HttpOnly attribute offers additional protection (this is not the case for an XSRF-TOKEN cookie / CSRF token, for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / security-sensitive cookies.

Sensitive Code Example

In php.ini you can specify the flags for the session cookie which is security-sensitive:

session.cookie_httponly = 0;  // Sensitive: this sensitive session cookie is created with the httponly flag set to false and so it can be stolen easily in case of XSS vulnerability

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, true, false);  // Sensitive: this sensitive session cookie is created with the httponly flag (the fifth argument) set to false and so it can be stolen easily in case of XSS vulnerability

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, false); // Sensitive: this sensitive cookie is created with the httponly flag (the seventh argument) set to false and so it can be stolen easily in case of XSS vulnerability

By default, the setcookie and setrawcookie functions leave the httponly flag (the seventh argument) set to false, so cookies can be stolen easily in case of XSS vulnerability:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)
setrawcookie($name, $value, $expire, $path, $domain, true); // Sensitive: a sensitive cookie is created with the httponly flag (the seventh argument) not defined (by default set to false)

Compliant Solution

session.cookie_httponly = 1; // Compliant: the sensitive cookie is protected against theft thanks to cookie_httponly=1
session_set_cookie_params($lifetime, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the fifth argument set to true (HttpOnly=true)
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)
setrawcookie($name, $value, $expire, $path, $domain, true, true); // Compliant: the sensitive cookie is protected against theft thanks to the seventh argument set to true (HttpOnly=true)

See

php:S3332

This rule is deprecated, and will eventually be removed.

Why is this an issue?

Cookies without fixed lifetimes or expiration dates are known as non-persistent, or "session", cookies: they last only as long as the browser session and disappear when the browser closes. Cookies with expiration dates, "persistent" cookies, are stored and persisted until those dates.

Non-persistent cookies should be used for the management of logged-in sessions on web sites. To make a cookie non-persistent, simply omit the expires attribute.

This rule raises an issue when expires is set for a session cookie, either programmatically or via configuration, such as session.cookie_lifetime.
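
As an illustration (this snippet is not part of the original rule description), a hypothetical configuration and call that this rule would flag:

; php.ini
session.cookie_lifetime = 2592000  ; Noncompliant: the session cookie persists for 30 days
session.cookie_lifetime = 0        ; Compliant: the cookie lasts only as long as the browser session

setcookie("PHPSESSID", session_id(), time() + 2592000); // Noncompliant: an expiration date makes the session cookie persistent
setcookie("PHPSESSID", session_id()); // Compliant: omitting the expiration keeps the cookie non-persistent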

Resources

php:S3333

When accessing files on the local filesystem, PHP can enforce security checks to defend against some attacks. The open_basedir setting in the main PHP configuration defines a set of directories that the application is allowed to access. Access to locations outside of these directories will be blocked.

Why is this an issue?

The PHP runtime will allow the application to access all files underneath the configured set of directories. If no value is set, the application may access any file on the filesystem.

What is the potential impact?

open_basedir is commonly used to ensure that a PHP application can only access files needed for the application function. While deactivating this setting does not pose a direct threat to the application’s security, it can make exploitation of other vulnerabilities easier and more severe.

If an attacker can exploit a path traversal vulnerability, they will be able to access any file made available to the application’s user account. This may include system-critical or otherwise sensitive files.

In shared hosting environments, a vulnerability can affect all co-hosted applications and not only the vulnerable one. open_basedir can help limit the scope of the compromise in that case.

How to fix it

The main PHP configuration should define the open_basedir setting. This setting should not include overly large directories, such as the root directory of the filesystem.

Adding the current directory, denoted by “.”, to the open_basedir configuration is also dangerous. It is possible to change the current directory within PHP scripts by calling chdir(), effectively removing any protection.
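
A short sketch of that bypass, assuming a hypothetical open_basedir="." configuration:

chdir('/etc');                           // "." now resolves to /etc at check time
$content = file_get_contents('passwd');  // passes the open_basedir check despite /etc never being whitelisted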

Code examples

Noncompliant code example

; php.ini
open_basedir="/:${USER}/scripts/data"  ; Noncompliant; root directory in the list
; php.ini
; open_basedir= ; Noncompliant; setting commented out

Compliant solution

; php.ini
open_basedir="${USER}/scripts/data"
; php.ini (alternative)
open_basedir="/var/www/myapp/data"

Resources

Standards

php:S4784

This rule is deprecated; use S2631 instead.

Using regular expressions is security-sensitive. It has led in the past to the following vulnerabilities:

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as /(a+)+s/ will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and contains at least two instances of any of the following characters: *+{.

Example: (a+)*

The following functions are detected as executing regular expressions:

Note that ereg* functions were removed in PHP 7, and PHP 5 reached its end of life on January 1st, 2019. Using PHP 5 is dangerous, as it will receive no further security fixes.

This rule’s goal is to guide security code reviews.

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine’s performance decreases with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not set the constant pcre.backtrack_limit to a high value as it will increase the resource consumption of PCRE functions.

Check the error codes of PCRE functions via preg_last_error.
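
A minimal sketch of that check, reusing the vulnerable pattern shown above:

$result = preg_match('/(a+)+s/', str_repeat('a', 50) . 'b');
if ($result === false && preg_last_error() === PREG_BACKTRACK_LIMIT_ERROR) {
    // the engine gave up after hitting pcre.backtrack_limit;
    // treat this as a failed match rather than raising the limit
}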

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using. Do not run vulnerable regular expressions on user input.

If possible, use a library which is not vulnerable to ReDoS attacks, such as Google RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Avoid executing a user input string as a regular expression or use at least preg_quote to escape regular expression characters.

Exceptions

An issue will be created for the functions mb_ereg_search_pos, mb_ereg_search_regs and mb_ereg_search if and only if at least the first argument, i.e. the $pattern, is provided.

The current implementation does not follow variables. It will only detect regular expressions hard-coded directly in the function call.

$pattern = "/(a+)+/";
$result = eregi($pattern, $input);  // No issue will be raised even if it is Sensitive

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

php:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led in the past to the following vulnerabilities:

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a very few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie. The encoding can be reversed, and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, any information read from a cookie should be sanitized.

Sensitive Code Example

$value = "1234 1234 1234 1234";

// Review this cookie as it seems to send sensitive information (credit card number).
setcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive
setrawcookie("CreditCardNumber", $value, $expire, $path, $domain, true, true); // Sensitive

See

php:S3331

This rule is deprecated, and will eventually be removed.

A cookie’s domain specifies which websites should be able to read it. Left blank, browsers are supposed to only send the cookie to sites that exactly match the sending domain. For example, if a cookie was set by lovely.dream.com, it should only be readable by that domain, and not by nightmare.com or even strange.dream.com. If you want to allow sub-domain access for a cookie, you can specify it by adding a dot in front of the cookie’s domain, like so: .dream.com. But cookie domains should always use at least two levels.

Cookie domains can be set either programmatically or via configuration. This rule raises an issue when any cookie domain is set with a single level, as in .com.

Ask Yourself Whether

  • the domain attribute has only one level as domain naming.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

  • You should check that the domain attribute has been set and that its value has more than one level of domain naming, like: sonarsource.com

Sensitive Code Example

setcookie("TestCookie", $value, time()+3600, "/~path/", ".com", 1); // Noncompliant
session_set_cookie_params(3600, "/~path/", ".com"); // Noncompliant

// inside php.ini
session.cookie_domain=".com"; // Noncompliant

Compliant Solution

setcookie("TestCookie", $value, time()+3600, "/~path/", ".myDomain.com", 1);
session_set_cookie_params(3600, "/~path/", ".myDomain.com");

// inside php.ini
session.cookie_domain=".myDomain.com";

See

php:S3338

This rule is deprecated, and will eventually be removed.

Why is this an issue?

file_uploads is an on-by-default PHP configuration that allows files to be uploaded to your site. Since accepting files from strangers is inherently dangerous, this feature should be disabled unless it is absolutely necessary for your site.

This rule raises an issue when file_uploads is not explicitly disabled.

Noncompliant code example

; php.ini
file_uploads=1  ; Noncompliant

Compliant solution

; php.ini
file_uploads=0

Resources

php:S4433

Lightweight Directory Access Protocol (LDAP) servers provide two main authentication methods: the SASL and Simple ones. The Simple Authentication method also breaks down into three different mechanisms:

  • Anonymous Authentication
  • Unauthenticated Authentication
  • Name/Password Authentication

A server that accepts either the Anonymous or Unauthenticated mechanisms will accept connections from clients not providing credentials.

Why is this an issue?

When configured to accept the Anonymous or Unauthenticated authentication mechanism, an LDAP server will accept connections from clients that do not provide a password or other authentication credentials. Such users will be able to read or modify part or all of the data contained in the hosted directory.

What is the potential impact?

An attacker exploiting unauthenticated access to an LDAP server can access the data that is stored in the corresponding directory. The impact varies depending on the permission obtained on the directory and the type of data it stores.

Authentication bypass

If attackers get write access to the directory, they will be able to alter most of the data it stores. This might include sensitive technical data such as user passwords or asset configurations. Such an attack can typically lead to an authentication bypass on applications and systems that use the affected directory as an identity provider.

In such a case, all users configured in the directory might see their identity and privileges taken over.

Sensitive information leak

If attackers get read-only access to the directory, they will be able to read the data it stores. That data might include security-sensitive pieces of information.

Typically, attackers might get access to user account lists that they can use in further intrusion steps. For example, they could use such lists to perform password spraying, or related attacks, on all systems that rely on the affected directory as an identity provider.

If the directory contains some Personally Identifiable Information, an attacker accessing it might represent a violation of regulatory requirements in some countries. For example, this kind of security event would go against the European GDPR law.

How to fix it

Code examples

The following code is vulnerable to anonymous LDAP authentication because it binds to a remote server using the Anonymous Simple authentication mechanism.

Noncompliant code example

$ldapconn = ldap_connect("ldap.example.com");

if ($ldapconn) {
    $ldapbind = ldap_bind($ldapconn); // Noncompliant
}

Compliant solution

$ldaprdn  = 'uname';
$ldappass = 'password';

$ldapconn = ldap_connect("ldap.example.com");

if ($ldapconn) {
    $ldapbind = ldap_bind($ldapconn, $ldaprdn, $ldappass); // Compliant
}

Resources

Documentation

Standards

php:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset a password, etc.).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512, or SHA-3, are recommended. For password hashing, it’s even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2, or pbkdf2, because slow hashing slows down brute-force attacks.

Sensitive Code Example

$hash = md5($data); // Sensitive
$hash = sha1($data);   // Sensitive

Compliant Solution

// for a password
$hash = password_hash($password, PASSWORD_BCRYPT); // Compliant

// other context
$hash = hash("sha512", $data);

See

php:S4792

This rule is deprecated, and will eventually be removed.

Configuring loggers is security-sensitive. It has led in the past to the following vulnerabilities:

Logs are useful before, during and after a security incident.

  • Attackers will most of the time start their nefarious work by probing the system for vulnerabilities. Monitoring this activity and stopping it is the first step to prevent an attack from ever happening.
  • In case of a successful attack, logs should contain enough information to understand what damage an attacker may have inflicted.

Logs are also a target for attackers because they might contain sensitive information. Configuring loggers has an impact on the type of information logged and how they are logged.

This rule flags code that configures loggers so that it can be reviewed. The goal is to guide security code reviews.

Ask Yourself Whether

  • unauthorized users might have access to the logs, either because they are stored in an insecure location or because the application gives access to them.
  • the logs contain sensitive information on a production server. This can happen when the logger is in debug mode.
  • the log can grow without limit. This can happen when additional information is written into logs every time a user performs an action and the user can perform the action as many times as they want.
  • the logs do not contain enough information to understand the damage an attacker might have inflicted. The loggers mode (info, warn, error) might filter out important information. They might not print contextual information like the precise time of events or the server hostname.
  • the logs are only stored locally instead of being backed up or replicated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Check that your production deployment doesn’t have its loggers in "debug" mode as it might write sensitive information in logs.
  • Production logs should be stored in a secure location which is only accessible to system administrators.
  • Configure the loggers to display all warnings, info and error messages. Write relevant information such as the precise time of events and the hostname.
  • Choose a log format which is easy to parse and process automatically. It is important to process logs rapidly in case of an attack so that the impact is known and limited.
  • Check that the permissions of the log files are correct. If you index the logs in some other service, make sure that the transfer and the service are secure too.
  • Add limits to the size of the logs and make sure that no user can fill the disk with logs. This can happen even when the user does not control the logged information. An attacker could just repeat a logged action many times.

Remember that configuring loggers properly doesn’t make them bullet-proof. Here is a list of recommendations explaining how to use your logs:

  • Don’t log any sensitive information. This obviously includes passwords and credit card numbers, but also any personal information such as user names, locations, etc. Usually, any information which is protected by law is a good candidate for removal.
  • Sanitize all user inputs before writing them in the logs. This includes checking their size, content, encoding, syntax, etc. As for any user input, validate using whitelists whenever possible. Enabling users to write what they want in your logs can have many impacts: it could, for example, use up all your storage space or compromise your log indexing service.
  • Log enough information to monitor suspicious activities and evaluate the impact an attacker might have on your systems. Register events such as failed logins, successful logins, server side input validation failures, access denials and any important transaction.
  • Monitor the logs for any suspicious activity.

Sensitive Code Example

Basic PHP configuration:

function configure_logging() {
  error_reporting(E_RECOVERABLE_ERROR); // Sensitive
  error_reporting(32); // Sensitive

  ini_set('docref_root', '1'); // Sensitive
  ini_set('display_errors', '1'); // Sensitive
  ini_set('display_startup_errors', '1'); // Sensitive
  ini_set('error_log', "path/to/logfile"); // Sensitive - check logfile is secure
  ini_set('error_reporting', E_PARSE ); // Sensitive
  ini_set('error_reporting', 64); // Sensitive
  ini_set('log_errors', '0'); // Sensitive
  ini_set('log_errors_max_length', '512'); // Sensitive
  ini_set('ignore_repeated_errors', '1'); // Sensitive
  ini_set('ignore_repeated_source', '1'); // Sensitive
  ini_set('track_errors', '0'); // Sensitive

  ini_alter('docref_root', '1'); // Sensitive
  ini_alter('display_errors', '1'); // Sensitive
  ini_alter('display_startup_errors', '1'); // Sensitive
  ini_alter('error_log', "path/to/logfile"); // Sensitive - check logfile is secure
  ini_alter('error_reporting', E_PARSE ); // Sensitive
  ini_alter('error_reporting', 64); // Sensitive
  ini_alter('log_errors', '0'); // Sensitive
  ini_alter('log_errors_max_length', '512'); // Sensitive
  ini_alter('ignore_repeated_errors', '1'); // Sensitive
  ini_alter('ignore_repeated_source', '1'); // Sensitive
  ini_alter('track_errors', '0'); // Sensitive
}

Definition of custom loggers with psr/log

abstract class MyLogger implements \Psr\Log\LoggerInterface { // Sensitive
    // ...
}

abstract class MyLogger2 extends \Psr\Log\AbstractLogger { // Sensitive
    // ...
}

abstract class MyLogger3 {
    use \Psr\Log\LoggerTrait; // Sensitive
    // ...
}

Exceptions

No issue will be raised for logger configuration when it follows recommended settings for production servers. The following examples are all valid:

  ini_set('docref_root', '0');
  ini_set('display_errors', '0');
  ini_set('display_startup_errors', '0');

  error_reporting(0);
  ini_set('error_reporting', 0);

  ini_set('log_errors', '1');
  ini_set('log_errors_max_length', '0');
  ini_set('ignore_repeated_errors', '0');
  ini_set('ignore_repeated_source', '0');
  ini_set('track_errors', '1');

See

php:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in cURL

Code examples

The following code contains examples of disabled hostname validation.

Hostname validation is disabled by setting CURLOPT_SSL_VERIFYHOST to 0 or false. To enable validation, set the value to 2 or true, or do not set CURLOPT_SSL_VERIFYHOST at all, in which case the secure default is used.

Noncompliant code example

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);  // Noncompliant
curl_exec($curl);
curl_close($curl);

Compliant solution

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 2);
curl_exec($curl);
curl_close($curl);

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try following this solution (a cURL sketch follows the list below):

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.
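
With the self-signed certificate in place, the client can also trust it explicitly rather than weakening validation. A sketch, assuming a hypothetical dev.example.local host and certificate path:

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://dev.example.local/');
curl_setopt($curl, CURLOPT_CAINFO, '/path/to/dev-self-signed.crt'); // trust this specific certificate
curl_exec($curl); // CURLOPT_SSL_VERIFYHOST keeps its secure default of 2
curl_close($curl);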

Resources

Standards

php:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken" or "secret".

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

$password = "65DBGgwe4uazdWQA"; // Sensitive

$httpUrl = "https://example.domain?user=user&password=65DBGgwe4uazdWQA" // Sensitive
$sshUrl = "ssh://user:65DBGgwe4uazdWQA@example.domain" // Sensitive

Compliant Solution

$user = getUser();
$password = getPassword(); // Compliant

$httpUrl = "https://example.domain?user=$user&password=$password" // Compliant
$sshUrl = "ssh://$user:$password@example.domain" // Compliant

See

php:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances that attackers successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

$url = "http://example.com"; // Sensitive
$url = "ftp://anonymous@example.com"; // Sensitive
$url = "telnet://anonymous@example.com"; // Sensitive

$con = ftp_connect('example.com'); // Sensitive

$trans = (new Swift_SmtpTransport('XXX', 1234)); // Sensitive

$mailer = new PHPMailer(true); // Sensitive

define( 'FORCE_SSL_ADMIN', false); // Sensitive
define( 'FORCE_SSL_LOGIN', false); // Sensitive

Compliant Solution

$url = "https://example.com";
$url = "sftp://anonymous@example.com";
$url = "ssh://anonymous@example.com";

$con = ftp_ssl_connect('example.com');

$trans = (new Swift_SmtpTransport('smtp.example.org', 1234))
  ->setEncryption('tls')
;

$mailer = new PHPMailer(true);
$mailer->SMTPSecure = 'tls';

define( 'FORCE_SSL_ADMIN', true);
define( 'FORCE_SSL_LOGIN', true);

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

php:S5693

Rejecting requests with a significant content length is a good practice for controlling network traffic intensity, and thus resource consumption, in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • 8 MB or less for file uploads.
    • 2 MB or less for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.
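
At the PHP configuration level, comparable limits can also be enforced globally. A sketch matching the recommended values:

; php.ini
upload_max_filesize = 8M  ; caps each uploaded file
post_max_size = 8M        ; caps the size of the entire request body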

Sensitive Code Example

For Symfony Constraints:

use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Mapping\ClassMetadata;

class TestEntity
{
    public static function loadValidatorMetadata(ClassMetadata $metadata)
    {
        $metadata->addPropertyConstraint('upload', new Assert\File([
            'maxSize' => '100M', // Sensitive
        ]));
    }
}

For Laravel Validator:

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class TestController extends Controller
{
    public function test(Request $request)
    {
        $validatedData = $request->validate([
            'upload' => 'required|file', // Sensitive
        ]);
    }
}

Compliant Solution

For Symfony Constraints:

use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Mapping\ClassMetadata;

class TestEntity
{
    public static function loadValidatorMetadata(ClassMetadata $metadata)
    {
        $metadata->addPropertyConstraint('upload', new Assert\File([
            'maxSize' => '8M', // Compliant
        ]));
    }
}

For Laravel Validator:

use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class TestController extends Controller
{
    public function test(Request $request)
    {
        $validatedData = $request->validate([
            'upload' => 'required|file|max:8000', // Compliant
        ]);
    }
}

See

php:S6437

Secret leaks often occur when a sensitive piece of authentication data is stored with the source code of an application. Considering the source code is intended to be deployed across multiple assets, including source code repositories or application hosting servers, the secrets might get exposed to an unintended audience.

Why is this an issue?

In most cases, trust boundaries are violated when a secret is exposed in a source code repository or an uncontrolled deployment environment. Unintended people who don’t need to know the secret might get access to it. They might then be able to use it to gain unwanted access to associated services or resources.

The trust issue can be more or less severe depending on the people’s role and entitlement.

What is the potential impact?

The consequences vary greatly depending on the situation and on the audience the secret was exposed to. Still, two main scenarios should be considered.

Financial loss

Financial losses can occur when a secret is used to access a paid third-party-provided service and is disclosed as part of the source code of client applications. With the secret, every user of the application is able to use the third-party service without limit for their own needs, including in ways that were not expected.

This additional use of the secret will lead to added costs with the service provider.

Moreover, when rate or volume limiting is set up on the provider side, this additional use can prevent the regular operation of the affected application. This might result in a partial denial of service for all the application’s users.

Application’s security downgrade

A downgrade can happen when the disclosed secret is used to protect security-sensitive assets or features of the application. Depending on the affected asset or feature, the practical impact can range from a sensitive information leak to a complete takeover of the application, its hosting server or another linked component.

For example, an application that would disclose a secret used to sign user authentication tokens would be at risk of user identity impersonation. An attacker accessing the leaked secret could sign session tokens for arbitrary users and take over their privileges and entitlements.

How to fix it

Revoke the secret

Revoke any leaked secrets and remove them from the application source code.

Before revoking the secret, ensure that no other applications or processes are using it. Other usages of the secret will also be impacted when the secret is revoked.

Analyze recent secret use

When available, analyze authentication logs to identify any unintended or malicious use of the secret since its disclosure date. Doing this will allow determining if an attacker took advantage of the leaked secret and to what extent.

This operation should be part of a global incident response process.

Use a secret vault

A secret vault should be used to generate and store the new secret. This will ensure the secret’s security and prevent any further unexpected disclosure.

Depending on the development platform and the leaked secret type, multiple solutions are currently available.

Code examples

The following code example is noncompliant because it uses a hardcoded secret value.

Noncompliant code example

use Defuse\Crypto\KeyOrPassword;

function createKey() {
    $password = "3xAmpl3";  // Noncompliant
    return KeyOrPassword::createFromPassword($password);
}

Compliant solution

use Defuse\Crypto\KeyOrPassword;

function createKey() {
    $password = $_ENV["SECRET"];
    return KeyOrPassword::createFromPassword($password);
}

How does this work?

While the noncompliant code example contains a hard-coded password, the compliant solution retrieves the secret’s value from its environment. This makes the secret value environment-dependent and avoids storing the password in the source code itself.

Depending on the application and its underlying infrastructure, how the secret gets added to the environment might change.

Resources

Documentation

Standards

php:S2070

This rule is deprecated; use S4790 instead.

Why is this an issue?

The MD5 algorithm and its successor, SHA-1, are no longer considered secure, because it is too easy to create hash collisions with them. That is, it takes too little computational effort to come up with a different input that produces the same MD5 or SHA-1 hash, and using the new, same-hash value gives an attacker the same access as if they had the originally hashed value. This applies as well to the other Message-Digest algorithms: MD2, MD4, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160.

Consider using safer alternatives, such as SHA-256, SHA-512 or SHA-3.

Noncompliant code example

$password = ...

if (md5($password) === '1f3870be274f6c49b3e31a0c6728957f') { // Noncompliant; md5() hashing algorithm is not secure for password management
   [...]
}

if (sha1($password) === 'd0be2dc421be4fcd0172e5afceea3970e2f3d940') { // Noncompliant; sha1() hashing algorithm is not secure for password management
   [...]
}
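
A sketch of the safer alternatives mentioned above; password_verify() is shown for the password use case, and $expectedDigest and $storedHash are assumed to hold previously stored values:

if (hash('sha256', $data) === $expectedDigest) { // SHA-256 for integrity checks
   [...]
}

if (password_verify($password, $storedHash)) { // bcrypt/argon2 hashes produced by password_hash()
   [...]
}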

Resources

php:S2077

Formatted SQL queries can be difficult to maintain and debug, and they can increase the risk of SQL injection when untrusted values are concatenated into the query. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex/formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices
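
  • Use parameterized queries, prepared statements, or stored procedures with bound variables instead of building queries through string concatenation, as shown in the compliant solution below.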

Sensitive Code Example

$id = $_GET['id'];
mysql_connect('localhost', $username, $password) or die('Could not connect: ' . mysql_error());
mysql_select_db('myDatabase') or die('Could not select database');

$result = mysql_query("SELECT * FROM myTable WHERE id = " . $id);  // Sensitive, could be susceptible to SQL injection

while ($row = mysql_fetch_object($result)) {
    echo $row->name;
}

Compliant Solution

$id = $_GET['id'];
try {
    $conn = new PDO('mysql:host=localhost;dbname=myDatabase', $username, $password);
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $conn->prepare('SELECT * FROM myTable WHERE id = :id');
    $stmt->execute(array('id' => $id));

    while($row = $stmt->fetch(PDO::FETCH_OBJ)) {
        echo $row->name;
    }
} catch(PDOException $e) {
    echo 'ERROR: ' . $e->getMessage();
}

Exceptions

No issue will be raised if one of the functions is called with a hard-coded string (no concatenation) and this string does not contain a "$" sign.

$result = mysql_query("SELECT * FROM myTable WHERE id = 42") or die('Query failed: ' . mysql_error());  // Compliant

The current implementation does not follow variables. It will only detect SQL queries which are concatenated or contain a $ sign directly in the function call.

$query = "SELECT * FROM myTable WHERE id = " . $id;
$result = mysql_query($query);  // No issue will be raised even if it is Sensitive

See

php:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in Core PHP

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

$xml = file_get_contents('xxe.xml');
$doc = simplexml_load_string($xml, 'SimpleXMLElement', LIBXML_NOENT); // Noncompliant
$doc = new DOMDocument();
$doc->load('xxe.xml', LIBXML_NOENT); // Noncompliant
$reader = new XMLReader();
$reader->open('xxe.xml');
$reader->setParserProperty(XMLReader::SUBST_ENTITIES, true); // Noncompliant

Compliant solution

External entity substitution is disabled by default in simplexml_load_string() and DOMDocument::load().

$xml = file_get_contents('xxe.xml');
$doc = simplexml_load_string($xml, 'SimpleXMLElement');
$doc = new DOMDocument();
$doc->load('xxe.xml');
$reader = new XMLReader();
$reader->open('xxe.xml');
$reader->setParserProperty(XMLReader::SUBST_ENTITIES, false);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.
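
In core PHP, one way to implement such a whitelist is a custom entity loader. A minimal sketch (the allowed path is hypothetical):

libxml_set_external_entity_loader(function ($publicId, $systemId, $context) {
    $allowed = ['/var/www/schemas/safe.dtd'];
    // returning null blocks resolution of any entity that is not explicitly whitelisted
    return in_array($systemId, $allowed, true) ? $systemId : null;
});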

Resources

Standards

php:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive. It has led in the past to the following vulnerabilities:

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not their use through frameworks or high-level APIs such as HTTP connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases there is no need to open a socket yourself. Use libraries and existing protocols instead.
  • Encrypt all data sent if it is sensitive. Usually it is better to encrypt it even if the data is not sensitive as it might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.

Sensitive Code Example

function handle_sockets($domain, $type, $protocol, $port, $backlog, $addr, $hostname, $local_socket, $remote_socket, $fd) {
    socket_create($domain, $type, $protocol); // Sensitive
    socket_create_listen($port, $backlog); // Sensitive
    socket_addrinfo_bind($addr); // Sensitive
    socket_addrinfo_connect($addr); // Sensitive
    socket_create_pair($domain, $type, $protocol, $fd);

    fsockopen($hostname); // Sensitive
    pfsockopen($hostname); // Sensitive
    stream_socket_server($local_socket); // Sensitive
    stream_socket_client($remote_socket); // Sensitive
    stream_socket_pair($domain, $type, $protocol); // Sensitive
}

See

php:S2964

This rule is deprecated, and will eventually be removed.

Why is this an issue?

sleep is sometimes used in a mistaken attempt to prevent Denial of Service (DoS) attacks by throttling response rate. But because it ties up a thread, each request takes longer to serve than it otherwise would, making the application more vulnerable to DoS attacks, rather than less.

Noncompliant code example

if (is_bad_ip($requester)) {
  sleep(5);  // Noncompliant
}

Resources

php:S5328

If a session ID can be guessed (not generated with a secure pseudo-random generator, of insufficient length, etc.), an attacker may be able to hijack another user’s session.

Ask Yourself Whether

  • the session ID is not unique.
  • the session ID is set from a user-controlled input.
  • the session ID is generated with an insecure pseudo-random generator.
  • the session ID length is too short.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Don’t manually generate session IDs; use the language’s native functionality instead.

Sensitive Code Example

session_id(bin2hex(random_bytes(4))); // Sensitive: 4 bytes is too short
session_id($_POST["session_id"]); // Sensitive: session ID can be specified by the user

Compliant Solution

session_regenerate_id(); // Compliant
session_id(bin2hex(random_bytes(16))); // Compliant

See

php:S1523

Executing code dynamically is security-sensitive. It has led in the past to the following vulnerabilities:

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon because they also increase the risk of injected code. Such attacks can run either on the server or in the client (example: XSS attacks) and have a huge impact on an application’s security.

This rule marks for review each occurrence of the eval function. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and the same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
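
As a sketch of that advice, dynamic behavior can often be expressed as a lookup into hard-coded code instead of eval() (the operation names here are hypothetical):

$operations = [
    'sum'  => fn($a, $b) => $a + $b,
    'diff' => fn($a, $b) => $a - $b,
];
$name = $_GET['op'] ?? '';
if (isset($operations[$name])) {
    echo $operations[$name](1, 2); // only pre-defined code can ever run
}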

Sensitive Code Example

eval($code_to_be_dynamically_executed);

See

php:S2053

This vulnerability increases the likelihood that attackers are able to compute the cleartext of password hashes.

Why is this an issue?

During the process of password hashing, an additional component, known as a "salt," is often integrated to bolster the overall security. This salt, acting as a defensive measure, primarily wards off certain types of attacks that leverage pre-computed tables to crack passwords.

However, potential risks emerge when the salt is deemed insecure. This can occur when the salt is consistently the same across all users or when it is too short or predictable. In scenarios where users share the same password and salt, their password hashes will inevitably mirror each other. Similarly, a short salt heightens the probability of multiple users unintentionally having identical salts, which can potentially lead to identical password hashes. These identical hashes streamline the process for potential attackers to recover clear-text passwords. Thus, the emphasis on implementing secure, unique, and sufficiently lengthy salts in password-hashing functions is vital.

What is the potential impact?

Despite best efforts, even well-guarded systems might have vulnerabilities that could allow an attacker to gain access to the hashed passwords. This could be due to software vulnerabilities, insider threats, or even successful phishing attempts that give attackers the access they need.

Once the attacker has these hashes, they will likely attempt to crack them using a couple of methods. One is brute force, which entails trying every possible combination until the correct password is found. While this can be time-consuming, having the same salt for all users or a short salt can make the task significantly easier and faster.

If multiple users have the same password and the same salt, their password hashes would be identical. This means that if an attacker successfully cracks one hash, they have effectively cracked all identical ones, granting them access to multiple accounts at once.

A short salt, while less critical than a shared one, still increases the odds of different users having the same salt. This might create clusters of password hashes with identical salt that can then be attacked as explained before.

With short salts, the probability that two users share both the same password and the same salt might be low, depending on the salt size. The shorter the salt, the higher the collision probability. In any case, using a longer, cryptographically secure salt should be preferred.

Exceptions

To securely store password hashes, it is recommended to rely on key derivation functions that are computationally intensive. Examples of such functions are:

  • Argon2
  • PBKDF2
  • Scrypt
  • Bcrypt

When they are used for password storage, using a secure, random salt is required.

However, those functions can also be used for other purposes such as master key derivation or password-based pre-shared key generation. In those cases, the implemented cryptographic protocol might require using a fixed salt to derive keys in a deterministic way. In such cases, using a fixed salt is safe and accepted.

How to fix it in Core PHP

Code examples

The following code contains examples of hard-coded salts.

Noncompliant code example

$salt = 'salty';
$hash = hash_pbkdf2('sha256', $password, $salt, 100000); // Noncompliant

Compliant solution

$salt = random_bytes(16);
$hash = hash_pbkdf2('sha256', $password, $salt, 100000);

How does this work?

This code ensures that each user’s password has a unique salt value associated with it. It generates a salt randomly and with a length that provides the required security level. It uses a salt length of at least 16 bytes (128 bits), as recommended by industry standards.

Here, the compliant code example ensures the salt is random and has a sufficient length by calling the random_bytes function with a length parameter set to 16. This function internally uses a cryptographically secure pseudo-random number generator.
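
As a side note (not part of the rule text), PHP’s built-in password_hash function sidesteps the problem entirely for password storage, since it generates a secure random salt internally:

// password_hash creates a cryptographically secure salt automatically
// and stores it inside the resulting hash string.
$hash = password_hash($password, PASSWORD_BCRYPT);

// password_verify extracts the salt from $hash to check a candidate password.
$ok = password_verify($password, $hash);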

Resources

Standards

php:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

chmod("foo", 0777); // Sensitive
umask(0); // Sensitive
umask(0750); // Sensitive

For Symfony Filesystem:

use Symfony\Component\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0777); // Sensitive

For Laravel Filesystem:

use Illuminate\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0777); // Sensitive

Compliant Solution

chmod("foo", 0750); // Compliant
umask(0027); // Compliant

For Symfony Filesystem:

use Symfony\Component\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0750); // Compliant

For Laravel Filesystem:

use Illuminate\Filesystem\Filesystem;

$fs = new Filesystem();
$fs->chmod("foo", 0750); // Compliant

See

php:S6345

External requests initiated by a WordPress server should be considered as security-sensitive. They may contain sensitive data which is stored in the files or in the database of the server. It’s important for the administrator of a WordPress server to understand what they contain and to which servers they are sent.

WordPress makes it possible to block external requests by setting the WP_HTTP_BLOCK_EXTERNAL option to true. It’s then possible to authorize requests to only a few servers using another option named WP_ACCESSIBLE_HOSTS.

Ask Yourself Whether

  • Your WordPress website contains code which may call external requests to servers you don’t know.
  • Your WordPress website may send sensitive data to other servers.
  • Your WordPress website uses a lot of plugins or themes.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Uninstall WordPress plugins which send requests to servers you don’t know.
  • Make sure that WP_HTTP_BLOCK_EXTERNAL is defined in wp-config.php.
  • Make sure that WP_HTTP_BLOCK_EXTERNAL is set to true.
  • Make sure that WP_ACCESSIBLE_HOSTS is configured to authorize requests to the servers you trust.

Sensitive Code Example

define( 'WP_HTTP_BLOCK_EXTERNAL', false ); // Sensitive

Compliant Solution

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'api.wordpress.org' );
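
If several external servers must stay reachable, WP_ACCESSIBLE_HOSTS accepts a comma-separated list, and WordPress also supports wildcard entries (the hostnames below are illustrative):

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
// Comma-separated allow-list; wildcard subdomains such as '*.wordpress.org' work too.
define( 'WP_ACCESSIBLE_HOSTS', 'api.wordpress.org,*.wordpress.org' );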

See

php:S6348

By default, the WordPress administrator and editor roles can add unfiltered HTML content in various places, such as post content. This includes the capability to add JavaScript code.

If an account with such a role gets hijacked, this capability can be used to plant malicious JavaScript code that gets executed whenever somebody visits the website.

Ask Yourself Whether

  • You really need the possibility to add unfiltered HTML with editor or administrator roles.
  • There’s a chance that the accounts of authorized users get compromised.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The unfiltered_html capability should only be granted to trusted roles that need to use markup when publishing dynamic content to the WordPress website. If this capability is not required by any users, including the administrator and editor roles, then it’s recommended to set DISALLOW_UNFILTERED_HTML to true.

Sensitive Code Example

define( 'DISALLOW_UNFILTERED_HTML', false ); // Sensitive

Compliant Solution

define( 'DISALLOW_UNFILTERED_HTML', true );

See

php:S1313

Hardcoding IP addresses is security-sensitive. It has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when an IP address is hardcoded, solving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal IP address (assigned to an identifiable person).

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '8.8.8.8', 23);  // Sensitive

Compliant Solution

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, IP_ADDRESS, 23);  // Compliant
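
A sketch of the configurable approach, assuming the address is supplied through a hypothetical SERVER_IP environment variable:

$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
// Read the address from the environment instead of hard-coding it.
$serverIp = getenv('SERVER_IP'); // returns false if the variable is not set
socket_connect($socket, $serverIp, 23);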

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

php:S6341

WordPress makes it possible to edit theme and plugin files directly in the Administration Screens. While it may look like an easy way to customize a theme or do a quick change, it’s a dangerous feature. When visiting the theme or plugin editor for the first time, WordPress displays a warning to make it clear that using such a feature may break the web site by mistake.

More importantly, users who have access to this feature can trigger the execution of any PHP code and may therefore take full control of the WordPress instance. This security risk could be exploited by an attacker who manages to get access to one of the authorized users.

Setting the DISALLOW_FILE_EDIT option to true in wp-config.php disables this risky feature. The default value is false.

Ask Yourself Whether

  • You really need to use the theme and plugin editors.
  • The theme and plugin editors are available to users who cannot be fully trusted.
  • There’s a chance that the accounts of authorized users get compromised.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Modify the theme and plugin files using a local editor and deploy them to the server in a secure way.
  • Make sure that DISALLOW_FILE_EDIT is defined in wp-config.php.
  • Make sure that DISALLOW_FILE_EDIT is set to true.

Sensitive Code Example

define( 'DISALLOW_FILE_EDIT', false ); // Sensitive

Compliant Solution

define( 'DISALLOW_FILE_EDIT', true );

See

php:S6343

Automatic updates are a great way of making sure your application gets security updates as soon as they are available. Once a vendor releases a security update, it is crucial to apply it in a timely manner before malicious actors exploit the vulnerability. Relying on manual updates is usually too late, especially if the application is publicly accessible on the internet.

Ask Yourself Whether

  • there is no specific reason for deactivating all automatic updates.
  • you meant to deactivate only automatic major updates.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Don’t deactivate automatic updates unless you have a good reason to do so. This way, you’ll be sure to receive security updates as soon as they are available. If you are worried about an automatic update breaking something, check if it is possible to only activate automatic updates for minor or security updates.

Sensitive Code Example

define( 'WP_AUTO_UPDATE_CORE', false ); // Sensitive
define( 'AUTOMATIC_UPDATER_DISABLED', true ); // Sensitive

Compliant Solution

define( 'WP_AUTO_UPDATE_CORE', true ); // Minor and major automatic updates enabled
define( 'WP_AUTO_UPDATE_CORE', 'minor' ); // Only minor updates are enabled
define( 'AUTOMATIC_UPDATER_DISABLED', false );

See

php:S6346

WordPress has a database repair and optimization mode that can be activated by setting WP_ALLOW_REPAIR to true in the configuration.

If activated, the repair page can be accessed by any user, authenticated or not. This makes sense because if the database is corrupted, the authentication mechanism might not work.

Malicious users could trigger this potentially costly operation repeatedly, slowing down the website and making it unavailable.

Ask Yourself Whether

  • The database is not currently corrupted.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to enable automatic database repair mode only in case of database corruption. This feature should be deactivated again when the database issue is resolved.

Sensitive Code Example

define( 'WP_ALLOW_REPAIR', true ); // Sensitive

Compliant Solution

// The default value is false, so the value does not have to be explicitly set.
define( 'WP_ALLOW_REPAIR', false );

See

php:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered insecure.

This rule raises an issue at every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it on the command line. It is common to write it to the process’s standard input, or to give the path to a file containing the information.
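
A minimal sketch of such sanitization, assuming a hypothetical CLI script that expects a single numeric argument:

// Validate command line input before use; reject anything unexpected.
$count = $argv[1] ?? null;
if ($count === null || !ctype_digit($count)) {
    fwrite(STDERR, "Usage: script.php <count>\n");
    exit(1);
}
$count = (int) $count; // safe to use from here on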

Sensitive Code Example

Built-in access to $argv

function globfunc() {
    global $argv; // Sensitive. Reference to global $argv
    foreach ($argv as $arg) { // Sensitive.
        // ...
    }
}

function myfunc($argv) {
    $param = $argv[0]; // OK. Reference to local $argv parameter
    // ...
}

foreach ($argv as $arg) { // Sensitive. Reference to $argv.
    // ...
}

$myargv = $_SERVER['argv']; // Sensitive. Equivalent to $argv.

function serve() {
    $myargv = $_SERVER['argv']; // Sensitive.
    // ...
}

myfunc($argv); // Sensitive

$myvar = $HTTP_SERVER_VARS[0]; // Sensitive. Note: $HTTP_SERVER_VARS has been removed since PHP 5.4.

$options = getopt('a:b:'); // Sensitive. Parsing arguments.

$GLOBALS["argv"]; // Sensitive. Equivalent to $argv.

function myglobals() {
    $GLOBALS["argv"]; // Sensitive
}

$argv = [1,2,3]; // Sensitive. It is a bad idea to override argv.

Zend Console

new Zend\Console\Getopt(['myopt|m' => 'this is an option']); // Sensitive

Getopt-php library

new \GetOpt\Option('m', 'myoption', \GetOpt\GetOpt::REQUIRED_ARGUMENT); // Sensitive

See

php:S4828

Signaling processes or process groups can seriously affect the stability of this application or other applications on the same system.

Accidentally setting an incorrect PID or signal or allowing untrusted sources to assign arbitrary values to these parameters may result in a denial of service.

Also, the system treats the signal differently if the destination PID is less than or equal to 0. This different behavior may affect multiple processes with the same (E)UID simultaneously if the call is left uncontrolled.

Ask Yourself Whether

  • The parameters pid and sig are untrusted (they come from an external source).
  • This function is triggered by non-administrators.
  • Signal handlers on the target processes stop important functions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For stateful applications with user management, ensure that only administrators trigger this code.
  • Verify that the pid and sig parameters are correct before using them.
  • Ensure that the process sending the signals runs with as few OS privileges as possible.
  • Isolate the process on the system based on its (E)UID.
  • Ensure that the signal does not interrupt any essential functions when intercepted by a target’s signal handlers.

Sensitive Code Example

$targetPid = (int)$_GET["pid"];
posix_kill($targetPid, 9); // Sensitive

Compliant Solution

$targetPid = (int)$_GET["pid"];

// Validate the untrusted PID
// with a pre-approved list or authorization checks
if (isValidPid($targetPid)) {
    posix_kill($targetPid, 9);
}
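
As an additional hardening step (an assumption beyond the rule text), prefer a catchable signal such as SIGTERM over SIGKILL, so that the target process can shut down cleanly:

if (isValidPid($targetPid)) {
    // 15 = SIGTERM: the target can catch it and clean up,
    // unlike 9 = SIGKILL, which cannot be caught or ignored.
    posix_kill($targetPid, 15);
}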

See

php:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

// Any reference to STDIN is Sensitive
$varstdin = STDIN; // Sensitive
stream_get_line(STDIN, 40); // Sensitive
stream_copy_to_stream(STDIN, STDOUT); // Sensitive
// ...


// Except those references as they can't create an injection vulnerability.
ftruncate(STDIN, 5); // OK
ftell(STDIN); // OK
feof(STDIN); // OK
fseek(STDIN, 5); // OK
fclose(STDIN); // OK


// STDIN can also be referenced like this
$mystdin = 'php://stdin'; // Sensitive

file_get_contents('php://stdin'); // Sensitive
readfile('php://stdin'); // Sensitive

$input = fopen('php://stdin', 'r'); // Sensitive
fclose($input); // OK

See

php:S4830

This vulnerability makes it possible for encrypted communication to be intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in cURL

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by setting CURLOPT_SSL_VERIFYPEER to false. To enable validation, set the value to true, or do not set CURLOPT_SSL_VERIFYPEER at all, so that the secure default value is used.

Noncompliant code example

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false); // Noncompliant
curl_exec($curl);
curl_close($curl);

Compliant solution

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://example.com/');
curl_exec($curl);
curl_close($curl);

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
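
For example, with cURL the validation can stay enabled while a specific CA bundle is trusted explicitly (a sketch; the bundle path is an assumption):

$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'https://internal.example.com/');
// CURLOPT_SSL_VERIFYPEER keeps its secure default (true); the internal or
// self-signed CA is trusted explicitly instead of disabling validation.
curl_setopt($curl, CURLOPT_CAINFO, '/path/to/trusted-ca-bundle.pem');
curl_exec($curl);
curl_close($curl);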

Resources

Standards

php:S6339

Why is this an issue?

Secret keys are used in combination with an algorithm to encrypt data. A typical use case is an authentication system. For such a system to be secure, the secret key should have a value which cannot be guessed and which is long enough to not be vulnerable to brute-force attacks.

A "salt" is an extra piece of data which is included when hashing data such as a password. Its value should have the same properties as a secret key.

This rule raises an issue when it detects that a secret key or a salt has a predictable value or that it’s not long enough.

Noncompliant code example

WordPress:

define('AUTH_KEY', 'hello'); // Noncompliant
define('AUTH_SALT', 'hello'); // Noncompliant
define('AUTH_KEY', 'put your unique phrase here'); // Noncompliant, this is the default value

Compliant solution

WordPress:

define('AUTH_KEY', 'D&ovlU#|CvJ##uNq}bel+^MFtT&.b9{UvR]g%ixsXhGlRJ7q!h}XWdEC[BOKXssj');
define('AUTH_SALT', 'FIsAsXJKL5ZlQo)iD-pt??eUbdc{_Cn<4!d~yqz))&B D?AwK%)+)F2aNwI|siOe');
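
One way to generate such values locally (a sketch, not from the rule text) is to hex-encode output from a cryptographically secure generator; WordPress also provides an online generator at https://api.wordpress.org/secret-key/1.1/salt/:

// Prints a 64-character unpredictable value suitable for AUTH_KEY or AUTH_SALT.
echo bin2hex(random_bytes(32));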

Resources

php:S2092

When a cookie is protected with the secure attribute set to true it will not be sent by the browser over an unencrypted HTTP request and thus cannot be observed by an unauthorized person during a man-in-the-middle attack.

Ask Yourself Whether

  • the cookie is, for instance, a session cookie that is not designed to be sent over non-HTTPS communication.
  • it’s not certain whether the website contains mixed content or not (i.e. HTTPS everywhere or not)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • It is recommended to use HTTPS everywhere, so setting the secure flag to true should be the default behaviour when creating cookies.
  • Set the secure flag to true for session-cookies.

Sensitive Code Example

In php.ini you can specify the flags for the session cookie, which is security-sensitive:

session.cookie_secure = 0; // Sensitive: this security-sensitive session cookie is created with the secure flag set to false (cookie_secure = 0)

Same thing in PHP code:

session_set_cookie_params($lifetime, $path, $domain, false);
// Sensitive: this security-sensitive session cookie is created with the secure flag (the fourth argument) set to _false_

If you create a custom security-sensitive cookie in your PHP code:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, false);  // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) set to _false_

By default, the setcookie and setrawcookie functions set the sixth argument / secure flag to false:

$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain);  // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (by default to false)
setrawcookie($name, $value, $expire, $path, $domain);  // Sensitive: a security-sensitive cookie is created with the secure flag (the sixth argument) not defined (by default to false)

Compliant Solution

session.cookie_secure = 1; // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the cookie_secure property set to 1
session_set_cookie_params($lifetime, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the fourth argument) set to true
$value = "sensitive data";
setcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true
setrawcookie($name, $value, $expire, $path, $domain, true); // Compliant: the sensitive cookie will not be sent during an unencrypted HTTP request thanks to the secure flag (the sixth argument) set to true
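
Since PHP 7.3, setcookie also accepts an options array, which makes the flags explicit (a sketch reusing the variables from the examples above):

setcookie($name, $value, [
    'expires'  => $expire,
    'path'     => $path,
    'domain'   => $domain,
    'secure'   => true,   // only sent over HTTPS
    'httponly' => true,   // not readable from JavaScript
]);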

See

php:S4834

This rule is deprecated, and will eventually be removed.

The access control of an application must be properly implemented in order to restrict access to resources to authorized entities; otherwise this could lead to vulnerabilities.

Granting correct permissions to users, applications, groups or roles, and defining the required permissions that allow access to a resource, is sensitive and must therefore be done with care. For instance, it is obvious that only users with administrator privilege should be authorized to add/remove the administrator permission of another user.

Ask Yourself Whether

  • Permissions granted to an entity (user, application) allow access to information or functionalities not needed by this entity.
  • Privileges are easily acquired (e.g. based on the location of the user or the type of device used, defined by third parties, not requiring approval …).
  • Inherited permissions, default permissions, or no privileges (e.g. an anonymous user) authorize access to a protected resource.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

At minimum, an access control system should:

  • Use a well-defined access control model like RBAC or ACL.
  • Review entities' permissions regularly to remove permissions that are no longer needed.
  • Respect the principle of least privilege ("an entity has access only to the information and resources that are necessary for its legitimate purpose").

Sensitive Code Example

CakePHP

use Cake\Auth\BaseAuthorize;
use Cake\Controller\Controller;

abstract class MyAuthorize extends BaseAuthorize { // Sensitive. Class extending Cake\Auth\BaseAuthorize.
    // ...
}

// Note that "isAuthorized" methods will only be detected in direct subclasses of Cake\Controller\Controller.
abstract class MyController extends Controller {
    public function isAuthorized($user) { // Sensitive. Method called isAuthorized in a Cake\Controller\Controller.
        return false;
    }
}

See

php:S5122

Having a permissive Cross-Origin Resource Sharing policy is security-sensitive. It has led to vulnerabilities in the past.

The same-origin policy in browsers prevents, by default and for security reasons, a JavaScript frontend from performing a cross-origin HTTP request to a resource that has a different origin (domain, protocol, or port) from its own. The requested target can append additional HTTP headers in the response, called CORS headers, that act like directives for the browser and change the access control policy / relax the same-origin policy.

Ask Yourself Whether

  • You don’t trust the origin specified, example: Access-Control-Allow-Origin: untrustedwebsite.com.
  • Access control policy is entirely disabled: Access-Control-Allow-Origin: *
  • Your access control policy is dynamically defined by a user-controlled input like the Origin header.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • The Access-Control-Allow-Origin header should be set only for a trusted origin and for specific resources.
  • Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks).

Sensitive Code Example

PHP built-in header function:

header("Access-Control-Allow-Origin: *"); // Sensitive

Laravel:

response()->header('Access-Control-Allow-Origin', "*"); // Sensitive

Symfony:

use Symfony\Component\HttpFoundation\Response;

$response = new Response(
    'Content',
    Response::HTTP_OK,
    ['Access-Control-Allow-Origin' => '*'] // Sensitive
);
$response->headers->set('Access-Control-Allow-Origin', '*'); // Sensitive

User-controlled origin:

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\Request;

$origin = $request->headers->get('Origin');

$response->headers->set('Access-Control-Allow-Origin', $origin); // Sensitive

Compliant Solution

PHP built-in header function:

header("Access-Control-Allow-Origin: $trusteddomain");

Laravel:

response()->header('Access-Control-Allow-Origin', $trusteddomain);

Symfony:

use Symfony\Component\HttpFoundation\Response;

$response = new Response(
    'Content',
    Response::HTTP_OK,
    ['Access-Control-Allow-Origin' => $trusteddomain]
);

$response->headers->set('Access-Control-Allow-Origin', $trusteddomain);

User-controlled origin validated with an allow-list:

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpFoundation\Request;

$origin = $request->headers->get('Origin');

if (in_array($origin, $trustedOrigins)) {
    $response->headers->set('Access-Control-Allow-Origin', $origin);
}

See

php:S5808

When granting users access to resources of an application, such an authorization should be based on strong decisions. For instance, a user may be authorized to access a resource only if they are authenticated, or if they have the correct role and privileges.

Why is this an issue?

Access control is a critical aspect of web frameworks that ensures proper authorization and restricts access to sensitive resources or actions. To enable access control, web frameworks offer components that are responsible for evaluating user permissions and making access control decisions. They might examine the user’s credentials, such as roles or privileges, and compare them against predefined rules or policies to determine whether the user should be granted access to a specific resource or action.

Conventionally, these checks should never grant access to every request received. If an endpoint or component is meant to be public, then it should be ignored by access control components. Conversely, if an endpoint should deny some users from accessing it, then access control has to be configured correctly for this endpoint.

Granting unrestricted access to all users can lead to security vulnerabilities and potential misuse of critical functionalities. It is important to carefully assess access decisions based on factors such as user roles, resource sensitivity, and business requirements. Implementing a robust and granular access control mechanism is crucial for the security and integrity of the web application itself and its surrounding environment.

What is the potential impact?

Not verifying user access strictly can introduce significant security risks. Some of the most prominent risks are listed below. Depending on the use case, it is very likely that other risks are introduced on top of the ones listed.

Unauthorized access

As the access of users is not checked strictly, it becomes very easy for an attacker to gain access to restricted areas or functionalities, potentially compromising the confidentiality, integrity, and availability of sensitive resources. They may exploit this access to perform malicious actions, such as modifying or deleting data, impersonating legitimate users, or gaining administrative privileges, ultimately compromising the security of the system.

Theft of sensitive data

Theft of sensitive data can result from incorrect access control if attackers manage to gain access to databases, file systems, or other storage mechanisms where sensitive data is stored. This can lead to the theft of personally identifiable information (PII), financial data, intellectual property, or other confidential information. The stolen data can be used for various malicious purposes, such as identity theft, financial fraud, or selling the data on the black market, causing significant harm to individuals and organizations affected by the breach.

How to fix it in Symfony

Code examples

Noncompliant code example

The vote method of a VoterInterface implementation is not compliant when it returns only an affirmative decision (ACCESS_GRANTED):

class NoncompliantVoter implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        return self::ACCESS_GRANTED; // Noncompliant
    }
}

The voteOnAttribute method of a Voter class is not compliant when it returns only an affirmative decision (true):

class NoncompliantVoter extends Voter
{
    protected function supports(string $attribute, $subject)
    {
        return true;
    }

    protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token)
    {
        return true; // Noncompliant
    }
}

Compliant solution

The vote method of a VoterInterface type should return a negative decision (ACCESS_DENIED) or abstain from making a decision (ACCESS_ABSTAIN):

class CompliantVoter implements VoterInterface
{
    public function vote(TokenInterface $token, $subject, array $attributes)
    {
        if (foo()) {
            return self::ACCESS_GRANTED;
        } else if (bar()) {
            return self::ACCESS_ABSTAIN;
        }
        return self::ACCESS_DENIED;
    }
}

The voteOnAttribute method of a Voter type should return a negative decision (false):

class CompliantVoter extends Voter
{
    protected function supports(string $attribute, $subject)
    {
        return true;
    }

    protected function voteOnAttribute(string $attribute, $subject, TokenInterface $token)
    {
        if (foo()) {
            return true;
        }
        return false;
    }
}

Resources

Standards

Web:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which could trick real users into entering their credentials.

Ask Yourself Whether

  • The application opens untrusted external URLs.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use noopener to prevent untrusted pages from abusing window.opener.

Note: in Chrome 88+, Firefox 79+ or Safari 12.1+, target=_blank on anchors implies rel=noopener, which makes the protection enabled by default.

Sensitive Code Example

<a href="http://example.com/dangerous" target="_blank"> <!-- Sensitive -->

<a href="{{variable}}" target="_blank"> <!-- Sensitive -->

Compliant Solution

To prevent pages from abusing window.opener, use rel=noopener on <a href=> to force its value to be null on the opened pages.

<a href="http://petssocialnetwork.io" target="_blank" rel="noopener">

Exceptions

No issue will be raised when href contains a hardcoded relative URL, as it then has less chance of being vulnerable. A URL is considered hardcoded and relative if it doesn’t start with http:// or https://, and if it does not contain any of the characters {}$()[]

<a href="internal.html" target="_blank" >

See

Web:S5247

To reduce the risk of cross-site scripting attacks, templating systems, such as Twig, Django, Smarty and Groovy's template engine, allow configuration of automatic variable escaping before rendering templates. When escaping occurs, characters that make sense to the browser (e.g. <a>) will be transformed/replaced with escaped/sanitized values (e.g. &lt;a&gt;).

Auto-escaping is not a magic feature that annihilates all cross-site scripting attacks; it depends on the strategy applied and the context. For example, an "HTML auto-escaping" strategy (which only transforms HTML characters into HTML entities) will not be relevant when variables are used in an HTML attribute, because the ':' character is not escaped and thus an attack such as the one below is possible:

<a href="{{ myLink }}">link</a> // myLink = javascript:alert(document.cookie)
<a href="javascript:alert(document.cookie)">link</a> // JS injection (XSS attack)

Ask Yourself Whether

  • Templates are used to render web content and
    • dynamic variables in templates come from untrusted locations or are user-controlled inputs
    • there is no local mechanism in place to sanitize or validate the inputs.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Enable auto-escaping by default and continue to review the use of inputs in order to be sure that the chosen auto-escaping strategy is the right one.

Sensitive Code Example

<!-- Django templates -->
<p>{{ variable|safe }}</p><!-- Sensitive -->
{% autoescape off %}<!-- Sensitive -->

<!-- Jinja2 templates -->
<p>{{ variable|safe }}</p><!-- Sensitive -->
{% autoescape false %}<!-- Sensitive -->

See

Web:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add quiet malware that monitors users' sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface, or otherwise affect the general availability of the application.
  • mine cryptocurrencies in the background.

Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It does ensure that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

<script
    src="https://cdn.example.com/latest/script.js"
></script> <!-- Sensitive -->

Compliant Solution

<script
    src="https://cdn.example.com/v5.3.6/script.js"
    integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
></script>

See

ruby:S1313

Hardcoding IP addresses is security-sensitive. It has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It is misleading to use the same address in every environment (dev, sys, qa, prod).

Last but not least, it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks are always possible, but when an IP address is hardcoded, solving the issue takes more time, which increases an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • It can give an attacker information about the network topology.
  • It’s a personal IP address (assigned to an identifiable person).

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without rebuilding the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

Compliant Solution

ip = IP_ADDRESS # Compliant

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non-routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID)
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the range 2001:db8::/32, reserved for documentation purposes by RFC 3849

See

ruby:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to vulnerabilities.

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", …​

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

See

javascript:S5732

Clickjacking attacks occur when an attacker tries to trick a user into clicking certain buttons/links of a legitimate website. This attack can take place with malicious HTML frames well hidden in an attacker’s website.

For instance, suppose a safe and authentic page of a social network (https://socialnetworkexample.com/makemyprofilpublic) allows a user to change the visibility of their profile by clicking on a button. This is a critical feature with high privacy concerns. Users are generally well informed by the social network of the consequences of this action. An attacker can trick users, without their consent, into performing this action with the embedded code below added to a malicious website:

<html>
<b>Click on the button below to win 5000$</b>
<br>
<iframe src="https://socialnetworkexample.com/makemyprofilpublic" width="200" height="200"></iframe>
</html>

By playing with the size of the iframe, it’s sometimes possible to display only the critical parts of a page, in this case the button of the makemyprofilpublic page.

Ask Yourself Whether

  • Critical actions of the application are prone to clickjacking attacks because a simple click on a link or a button can trigger them.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy frame-ancestors directive, which is supported by all modern browsers and specifies the origins of frames allowed to be loaded by the browser (this directive deprecates X-Frame-Options).

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the frameAncestors directive (or if frameAncestors is set to 'none'):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'none'"] // Sensitive: frameAncestors is set to none
    }
  })
);

Compliant Solution

In an Express.js application, a standard way to implement the CSP frame-ancestors directive is the helmet-csp or helmet middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      // other directives
      frameAncestors: ["'example.com'"] // Compliant
    }
  })
);

See

javascript:S5734

MIME confusion attacks occur when an attacker successfully tricks a web browser into interpreting a resource as a different type than the one expected. To correctly interpret a resource (script, image, stylesheet …) web browsers look for the Content-Type header defined in the HTTP response received from the server, but often this header is not set or is set with an incorrect value. To avoid content-type mismatches and to provide the best user experience, web browsers try to deduce the right content-type, generally by inspecting the content of the resource (the first bytes). This "guess mechanism" is called MIME type sniffing.

Attackers can take advantage of this feature when a website ("example.com" here) allows arbitrary files to be uploaded. In that case, an attacker can upload a malicious image fakeimage.png (containing malicious JavaScript code or a polyglot content file) such as:

<script>alert(document.cookie)</script>

When the victim visits the website showing the uploaded image, the malicious script embedded in the image will be executed by web browsers performing MIME type sniffing.

Ask Yourself Whether

  • Content-Type header is not systematically set for all resources.
  • Content of resources can be controlled by users.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Implement the X-Content-Type-Options header with the nosniff value (the only existing value for this header), which is supported by all modern browsers and will prevent browsers from performing MIME type sniffing, so that in case of a Content-Type header mismatch, the resource is not interpreted. For example, within a <script> object context, JavaScript MIME types are expected (like application/javascript) in the Content-Type header.

Sensitive Code Example

In an Express.js application, the code is sensitive if, when using helmet, the noSniff middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet({
    noSniff: false, // Sensitive
  })
);

Compliant Solution

When using helmet in an Express.js application, the noSniff middleware should be enabled (it is also enabled by default):

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.noSniff());

See

javascript:S5730

Mixed content occurs when a resource is loaded with the HTTP protocol from a website accessed with the HTTPS protocol. Such mixed content is not encrypted and is exposed to MITM attacks, and it could break the entire level of protection that was desired by implementing encryption with the HTTPS protocol.

The main threat with mixed content is not only the confidentiality of resources but the whole website’s integrity:

  • Passive mixed content (e.g. <img src="http://example.com/picture.png">) allows an attacker to access and replace only these resources, like images, with malicious ones that could lead to successful phishing attacks.
  • With active mixed content (e.g. <script src="http://example.com/library.js">) an attacker can compromise the entire website, for example by injecting malicious JavaScript code (accessing and modifying the DOM, stealing cookies, etc).

Ask Yourself Whether

  • The HTTPS protocol is in place and external resources are fetched from the website pages.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the content security policy block-all-mixed-content directive, which is supported by all modern browsers and will block the loading of mixed content.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet-csp or helmet middleware is used without the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com']
    } // Sensitive: blockAllMixedContent directive is missing
  })
);

Compliant Solution

In an Express.js application, a standard way to block mixed content is to put in place the helmet-csp or helmet middleware with the blockAllMixedContent directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      "default-src": ["'self'", 'example.com', 'code.jquery.com'],
      blockAllMixedContent: [] // Compliant
    }
  })
);

See

javascript:S5736

The HTTP Referer header contains a URL set by web browsers and used by applications to track where the user came from. It is, for instance, a relevant value for web analytics services, but it can cause serious privacy and security problems if the URL contains confidential information. Note that Firefox, for instance, removes path information from the Referer header while browsing privately, to prevent data leaks.

Suppose an e-commerce website asks the user for their credit card number to purchase a product:

<html>
<body>
<form action="/valid_order" method="GET">
Type your credit card number to purchase products:
<input type=text id="cc" value="1111-2222-3333-4444">
<input type=submit>
</form>
</body>
</html>

When the above HTML form is submitted, an HTTP GET request will be performed; the requested URL will be https://example.com/valid_order?cc=1111-2222-3333-4444 with the credit card number inside, and it’s obviously not secure for these reasons:

  • URLs are stored in the history of browsers.
  • URLs could be accidentally shared when doing copy/paste actions.
  • URLs can be stolen if a malicious person looks at the computer screen of a user.

In addition to these threats, when further requests are performed from the "valid_order" page with a simple legitimate embedded script like this:

<script src="https://webanalyticservices_example.com/track">

The Referer header, which contains confidential information, will be sent to a third-party web analytics service and cause a privacy issue:

GET /track HTTP/2.0
Host: webanalyticservices_example.com
Referer: https://example.com/valid_order?cc=1111-2222-3333-4444

Ask Yourself Whether

  • Confidential information exists in URLs.
  • The semantics of HTTP methods are not respected (e.g. use of a GET method instead of POST when the state of the application is changed).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Confidential information should not be set inside URLs (GET requests) of the application, and a safe Referrer-Policy header (i.e. different from unsafe-url or no-referrer-when-downgrade) should be used to control how much information is included in the Referer header.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet referrerPolicy middleware is disabled or used with no-referrer-when-downgrade or unsafe-url:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer-when-downgrade' // Sensitive: no-referrer-when-downgrade is used
  })
);

Compliant Solution

In an Express.js application, a secure solution is to use the helmet referrerPolicy middleware set to no-referrer:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.referrerPolicy({
    policy: 'no-referrer' // Compliant
  })
);

See

javascript:S5739

When implementing the HTTPS protocol, websites usually continue to support the HTTP protocol in order to redirect users to HTTPS when they request an HTTP version of the website. These redirects are not encrypted and are therefore vulnerable to man-in-the-middle attacks. The Strict-Transport-Security policy header (HSTS) set by an application instructs the web browser to convert any HTTP request to HTTPS.

Web browsers that see the Strict-Transport-Security policy header for the first time record information specified in the header:

  • the max-age directive, which specifies how long the policy should be kept on the web browser.
  • the includeSubDomains optional directive, which specifies whether the policy should apply to all subdomains or not.
  • the preload optional directive, which is not part of the HSTS specification but is supported by all modern web browsers.

With the preload directive, the web browser never connects to the website over HTTP. To use this directive, the application concerned must be submitted to a preload service maintained by Google.

Ask Yourself Whether

  • The website is accessible with the unencrypted HTTP protocol.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Strict-Transport-Security policy header. It is recommended to apply this policy to all subdomains (includeSubDomains) and for at least 6 months (max-age=15552000), or even better for 1 year (max-age=31536000).
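
The corresponding response header for a one-year policy covering subdomains looks like this:

Strict-Transport-Security: max-age=31536000; includeSubDomains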

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet or hsts middleware is disabled or used without the recommended values:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 3153600, // Sensitive, recommended >= 15552000
  includeSubDomains: false // Sensitive, recommended 'true'
}));

Compliant Solution

In an Express.js application, a standard way to implement HSTS is with the helmet or hsts middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.hsts({
  maxAge: 31536000,
  includeSubDomains: true
})); // Compliant

See

javascript:S5743

This rule is deprecated, and will eventually be removed.

By default, web browsers perform DNS prefetching to reduce latency due to DNS resolutions required when a user clicks links on a website page.

For instance on example.com the hyperlink below contains a cross-origin domain name that must be resolved to an IP address by the web browser:

<a href="https://otherexample.com">go on our partner website</a>

DNS resolution can add significant latency to requests, especially if the page contains many links to cross-origin domains. DNS prefetching allows web browsers to perform DNS resolution in the background before the user clicks a link. This feature can cause privacy issues, because DNS resolution from the user’s computer is performed without their consent, even if they do not intend to visit the linked website.

On a complex private webpage, a combination of unique links/DNS resolutions can indicate, to an eavesdropper for instance, that the user is visiting the private page.

Ask Yourself Whether

  • Links to cross-origin domains could result in leakage of confidential information about the user’s navigation/behavior on the website.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the X-DNS-Prefetch-Control header with an off value, keeping in mind that this could significantly degrade website performance.
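
When disabled, the response header is simply:

X-DNS-Prefetch-Control: off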

Sensitive Code Example

In an Express.js application, the code is sensitive if the dns-prefetch-control middleware is disabled or used without the recommended value:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: true // Sensitive: allowing DNS prefetching is security-sensitive
  })
);

Compliant Solution

In an Express.js application, the dns-prefetch-control or helmet middleware is the standard way to implement the X-DNS-Prefetch-Control header:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
  helmet.dnsPrefetchControl({
    allow: false // Compliant
  })
);

See

javascript:S5852

Most regular expression engines use backtracking to try all possible execution paths of the regular expression when evaluating an input. In some cases this can cause performance issues, called catastrophic backtracking situations. In the worst case, the complexity of the regular expression is exponential in the size of the input, which means that a small, carefully crafted input (such as 20 characters) can trigger catastrophic backtracking and cause a denial of service of the application. Super-linear regex complexity can lead to the same impact with, in this case, a large, carefully crafted input (thousands of characters).

This rule determines the runtime complexity of a regular expression and informs you if it is not linear.

Ask Yourself Whether

  • The input is user-controlled.
  • The input size is not restricted to a small number of characters.
  • There is no timeout in place to limit the regex evaluation time.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

To avoid catastrophic backtracking situations, make sure that none of the following conditions apply to your regular expression.

In all of the following cases, catastrophic backtracking can only happen if the problematic part of the regex is followed by a pattern that can fail, causing the backtracking to actually happen.

  • If you have a repetition r* or r*?, such that the regex r could produce different possible matches (of possibly different lengths) on the same input, the worst case matching time can be exponential. This can be the case if r contains optional parts, alternations or additional repetitions (but not if the repetition is written in such a way that there’s only one way to match it).
  • If you have multiple repetitions that can match the same contents and are consecutive or are only separated by an optional separator or a separator that can be matched by both of the repetitions, the worst case matching time can be polynomial (O(n^c) where c is the number of problematic repetitions). For example a*b* is not a problem because a* and b* match different things and a*_a* is not a problem because the repetitions are separated by a '_' and can’t match that '_'. However, a*a* and .*_.* have quadratic runtime.
  • If the regex is not anchored to the beginning of the string, quadratic runtime is especially hard to avoid because whenever a match fails, the regex engine will try again starting at the next index. This means that any unbounded repetition, if it’s followed by a pattern that can fail, can cause quadratic runtime on some inputs. For example str.split(/\s*,/) will run in quadratic time on strings that consist entirely of spaces (or at least contain large sequences of spaces, not followed by a comma).

In order to rewrite your regular expression without these patterns, consider the following strategies:

  • If applicable, define a maximum number of expected repetitions using the bounded quantifiers, like {1,5} instead of + for instance.
  • Refactor nested quantifiers to limit the number of ways the inner group can be matched by the outer quantifier. For instance, the nested quantifier in (ba+)+ doesn’t cause performance issues: the inner group can be matched only if there is exactly one b character per repetition of the group.
  • Optimize regular expressions by emulating possessive quantifiers and atomic grouping.
  • Use negated character classes instead of . to exclude separators where applicable. For example the quadratic regex .*_.* can be made linear by changing it to [^_]*_.*

Sometimes it’s not possible to rewrite the regex to be linear while still matching what you want it to match. This is especially true when the regex is not anchored to the beginning of the string, where quadratic runtime is quite hard to avoid. In those cases, consider the following approaches:

  • Solve the problem without regular expressions
  • Use an alternative non-backtracking regex implementation, such as Google’s RE2 or node-re2.
  • Use multiple passes. This could mean pre- and/or post-processing the string manually before/after applying the regular expression to it, or using multiple regular expressions. One example of this would be to replace str.split(/\s*,\s*/) with str.split(",") and then trimming the spaces from the strings as a second step (see the sketch after this list).
  • It is often possible to make the regex infallible by making all the parts that could fail optional, which will prevent backtracking. Of course this means that you’ll accept more strings than intended, but this can be handled by using capturing groups to check whether the optional parts were matched or not and then ignoring the match if they weren’t. For example the regex x*y could be replaced with x*(y)? and then the call to str.match(regex) could be replaced with matched = str.match(regex) and matched[1] !== undefined.
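
A minimal sketch of that multiple-pass approach, assuming an input string str:

const parts = str.split(",").map(part => part.trim()); // split on plain commas, then trim whitespace in a second pass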

Sensitive Code Example

The regex evaluation will never end:

/(a+)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Sensitive

Compliant Solution

Possessive quantifiers do not keep backtracking positions, and thus can be used, where possible, to avoid performance issues. Unfortunately, they are not supported in JavaScript, but one can still mimic them using lookahead assertions and backreferences:

/((?=(a+))\2)+$/.test(
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"+
"aaaaaaaaaaaaaaa!"
); // Compliant

See

javascript:S2598

Why is this an issue?

If the file upload feature is implemented without proper folder restriction, it will result in an implicit trust violation within the server, as trusted files will be implicitly stored alongside third-party files that should be considered untrusted.

This can allow an attacker to disrupt the security of an internal server process or the running application.

What is the potential impact?

After discovering this vulnerability, attackers may attempt to upload as many different file types as possible, such as JavaScript files, Bash scripts, malware, or malicious configuration files targeting potential processes.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Full application compromise

In the worst-case scenario, the attackers succeed in uploading a file recognized by an internal tool, triggering code execution.

Depending on the attacker, code execution can be used with different intentions:

  • Download the internal server’s data, most likely to sell it.
  • Modify data or install malware, for instance malware that mines cryptocurrencies.
  • Stop services or exhaust resources, for instance, with fork bombs.

Server Resource Exhaustion

By repeatedly uploading large files, an attacker can consume excessive server resources, resulting in a denial of service.

If the component affected by this vulnerability is not a bottleneck that acts as a single point of failure (SPOF) within the application, the denial of service can only affect the attacker who caused it.

Even though a denial of service might have little direct impact, it can have secondary impact in architectures that use containers and container orchestrators. For example, it can cause unexpected container failures or overuse of resources.

In some cases, it is also possible to force the product to "fail open" when resources are exhausted, which means that some security features are disabled in an emergency.

These threats are particularly insidious if the attacked organization does not maintain a disaster recovery plan (DRP).

How to fix it in Formidable

Code examples

Noncompliant code example

const Formidable = require('formidable');

const form          = new Formidable(); // Noncompliant
form.uploadDir      = "/tmp/";
form.keepExtensions = true;

Compliant solution

const Formidable = require('formidable');

const form          = new Formidable();
form.uploadDir      = "/uploads/";
form.keepExtensions = false;

How does this work?

Use pre-approved folders

Create a special folder where untrusted data should be stored. This folder should be classified as untrusted and have the following characteristics:

  • It should have specific read and write permissions that belong to the right people or organizations.
  • It should have a size limit or its size should be monitored.
  • It should contain backup copies if it contains data that belongs to users.

This folder should not be located in /tmp, /var/tmp or in the Windows directory %TEMP%.
These folders are usually "world-writable", can be manipulated, and can be accidentally deleted by the system.

Also, the original file names and extensions should be changed to controlled strings to prevent unwanted code from being executed based on the file names.
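
For instance, here is a hedged sketch of generating a controlled file name server-side (the safeName helper and the /uploads/ folder are illustrative, not part of any specific API):

const crypto = require('crypto');
const path = require('path');

// ignore the client-provided name entirely and use a random, controlled string
function safeName(extension) {
  return crypto.randomBytes(16).toString('hex') + extension;
}

const storedPath = path.join('/uploads/', safeName('.dat'));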

Resources

javascript:S5742

Certificate Transparency (CT) is an open framework to protect against identity theft when certificates are issued. Certificate Authorities (CAs) electronically sign certificates after verifying the identity of the certificate owner. Attackers use, among other things, social engineering attacks to trick a CA into incorrectly verifying a spoofed identity/forged certificate.

CAs implement the Certificate Transparency framework to publicly log the records of newly issued certificates, allowing the public, and in particular the identity owner, to monitor these logs and verify that their identity was not usurped.

Ask Yourself Whether

  • The website identity is valuable and well-known, therefore prone to theft.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement the Expect-CT HTTP header, which instructs the web browser to check public CT logs to verify whether the website appears in them; if it does not, the browser will block the request and display a warning to the user.
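
For example, a header enforcing the policy for one day looks like this:

Expect-CT: max-age=86400, enforce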

Sensitive Code Example

In an Express.js application, the code is sensitive if the expect-ct middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(
    helmet({
      expectCt: false // Sensitive
    })
);

Compliant Solution

In an Express.js application, the expect-ct middleware is the standard way to implement Expect-CT. Usually, the deployment of this policy starts in report-only mode (enforce: false) with a low maxAge value (the number of seconds the policy will apply). Then, if everything works well, it is recommended to block future connections that violate the Expect-CT policy (enforce: true) and to use a greater value for the maxAge directive:

const express = require('express');
const helmet = require('helmet');

let app = express();

app.use(helmet.expectCt({
  enforce: true,
  maxAge: 86400
})); // Compliant

See

javascript:S4502

A cross-site request forgery (CSRF) attack occurs when a trusted user of a web application can be forced, by an attacker, to perform sensitive actions that they didn’t intend, such as updating their profile or sending a message; more generally, anything that can change the state of the application.

The attacker can trick the user/victim into clicking a link corresponding to the privileged action, or into visiting a malicious website that embeds a hidden web request. As web browsers automatically include cookies, the actions can be authenticated and sensitive.

Ask Yourself Whether

  • The web application uses cookies to authenticate users.
  • There exist sensitive operations in the web application that can be performed when the user is authenticated.
  • The state / resources of the web application can be modified by doing HTTP POST or HTTP DELETE requests for example.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Protection against CSRF attacks is strongly recommended:
    • to be activated by default for all unsafe HTTP methods.
    • implemented, for example, with an unguessable CSRF token.
  • Of course, all sensitive operations should not be performed with safe HTTP methods like GET, which are designed to be used only for information retrieval.

Sensitive Code Example

Express.js CSURF middleware protection is not found on an unsafe HTTP method like the POST method:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie: true });

let app = express();

// Sensitive: this operation doesn't look protected by the CSURF middleware (csrfProtection is not used)
app.post('/money_transfer', parseForm, function (req, res) {
  res.send('Money transferred');
});

Protection provided by Express.js CSURF middleware is globally disabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let app = express();

app.use(csrf({ cookie: true, ignoreMethods: ["POST", "GET"] })); // Sensitive: POST is an unsafe method

Compliant Solution

Express.js CSURF middleware protection is used on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let csrfProtection = csrf({ cookie:  true });

let app = express();

app.post('/money_transfer', parseForm, csrfProtection, function (req, res) { // Compliant
  res.send('Money transferred')
});
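
In a complete flow, the token generated by the middleware is typically injected into the HTML form; a hedged sketch reusing the csrfProtection middleware above (the route and markup are illustrative):

app.get('/money_transfer_form', csrfProtection, function (req, res) {
  // req.csrfToken() is provided by the csurf middleware
  res.send('<form action="/money_transfer" method="POST">' +
    '<input type="hidden" name="_csrf" value="' + req.csrfToken() + '">' +
    '<input type="submit"></form>');
});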

Protection provided by Express.js CSURF middleware is enabled on unsafe methods:

let csrf = require('csurf');
let express = require('express');

let app = express();

app.use(csrf({ cookie: true, ignoreMethods: ["GET"] })); // Compliant

See

javascript:S4507

Development tools and frameworks usually have options to make debugging easier for developers. Although these features are useful during development, they should never be enabled for applications deployed in production. Debug instructions or error messages can leak detailed information about the system, like the application’s path or file names.

Ask Yourself Whether

  • The code or configuration enabling the application debug features is deployed on production servers or distributed to end users.
  • The application runs by default with debug features activated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Do not enable debugging features on production servers or applications distributed to end users.

Sensitive Code Example

errorhandler Express.js middleware should not be used in production:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();
app.use(errorhandler()); // Sensitive

Compliant Solution

errorhandler Express.js middleware used only in development mode:

const express = require('express');
const errorhandler = require('errorhandler');

let app = express();

if (process.env.NODE_ENV === 'development') {
  app.use(errorhandler());
}

See

javascript:S5604

Powerful features are browser features (geolocation, camera, microphone, etc.) that can be accessed with JavaScript APIs and may require a permission granted by the user. These features can have a high impact on privacy and user security, and thus they should only be used if they are really necessary to implement the critical parts of an application.

This rule highlights intrusive permissions when requested with the future standard (but currently experimental) web browser query API and specific APIs related to the permission. It is highly recommended to customize this rule with the permissions considered as intrusive in the context of the web application.

Ask Yourself Whether

  • Some powerful features used by the application are not really necessary.
  • Users are not clearly informed why and when powerful features are used by the application.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In order to respect user privacy it is recommended to avoid using intrusive powerful features.

Sensitive Code Example

When using the geolocation API, Firefox for example retrieves personal information like nearby wireless access points and the IP address, and sends it to the default geolocation service provider, Google Location Services:

navigator.permissions.query({name:"geolocation"}).then(function(result) {
});  // Sensitive: geolocation is a powerful feature with high privacy concerns

navigator.geolocation.getCurrentPosition(function(position) {
  console.log("coordinates x="+position.coords.latitude+" and y="+position.coords.longitude);
}); // Sensitive: geolocation is a powerful feature with high privacy concerns

Compliant Solution

If geolocation is required, always explain to the user why the application needs it and prefer requesting an approximate location when possible:

<html>
<head>
    <title>
        Retailer website example
    </title>
</head>
<body>
    Type a city, street or zip code where you want to retrieve the closest retail locations of our products:
    <form method=post>
        <input type=text value="New York"> <!-- Compliant -->
    </form>
</body>
</html>

See

javascript:S5725

Using remote artifacts without integrity checks can lead to the unexpected execution of malicious code in the application.

On the client side, where front-end code is executed, malicious code could:

  • impersonate users' identities and take advantage of their privileges on the application.
  • add quiet malware that monitors users’ sessions and captures sensitive secrets.
  • gain access to sensitive clients' personal data.
  • deface, or otherwise affect the general availability of the application.
  • mine cryptocurrencies in the background.

Likewise, a compromised software piece that would be deployed on a server-side application could badly affect the application’s security. For example, server-side malware could:

  • access and modify sensitive technical and business data.
  • elevate its privileges on the underlying operating system.
  • use the compromised application as a pivot to attack the local network.

By ensuring that a remote artifact is exactly what it is supposed to be before using it, the application is protected from unexpected changes applied to it before it is downloaded.
In particular, integrity checks make it possible to identify an artifact that was replaced by malware on the publication website or, in a more benign scenario, that was legitimately changed by its author.

Important note: downloading an artifact over HTTPS only protects it while in transit from one host to another. It provides authenticity and integrity checks for the network stream only. It does not ensure the authenticity or security of the artifact itself.

Ask Yourself Whether

  • The artifact is a file intended to execute code.
  • The artifact is a file that is intended to configure or affect running code in some way.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

To check the integrity of a remote artifact, hash verification is the most reliable solution. It does ensure that the file has not been modified since the fingerprint was computed.

In this case, the artifact’s hash must:

  • Be computed with a secure hash algorithm such as SHA512, SHA384 or SHA256.
  • Be compared with a secure hash that was not downloaded from the same source.

To do so, the best option is to add the hash in the code explicitly, by following Mozilla’s official documentation on how to generate integrity strings.

Note: Use this fix together with version binding on the remote file. Avoid downloading files named "latest" or similar, so that the front-end pages do not break when the code of the latest remote artifact changes.
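
For reference, a minimal sketch of computing such an integrity string locally with Node.js (the file name is illustrative):

const crypto = require('crypto');
const fs = require('fs');

// base64-encoded SHA-384 digest of the artifact, prefixed with the algorithm name
const digest = crypto.createHash('sha384').update(fs.readFileSync('script.js')).digest('base64');
console.log('sha384-' + digest);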

Sensitive Code Example

The following code sample uses neither integrity checks nor version pinning:

let script = document.createElement("script");
script.src = "https://cdn.example.com/latest/script.js"; // Sensitive
script.crossOrigin = "anonymous";
document.head.appendChild(script);

Compliant Solution

let script = document.createElement("script");
script.src = "https://cdn.example.com/v5.3.6/script.js";
script.integrity = "sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC";
script.crossOrigin = "anonymous";
document.head.appendChild(script);

See

javascript:S5728

Content Security Policy (CSP) (fetch directives) is a W3C standard which is used by a server to specify, via an HTTP header, the origins from which the browser is allowed to load resources. It can help to mitigate the risk of cross-site scripting (XSS) attacks and reduce the privileges used by an application. If the website doesn’t define a CSP header, the browser applies the same-origin policy by default.

Content-Security-Policy: default-src 'self'; script-src 'self' http://www.example.com

In the above example, all resources are allowed from the website where this header is set and script resources fetched from example.com are also authorized:

<img src="selfhostedimage.png"> <!-- will be loaded because the default-src 'self' directive is applied -->
<img src="http://www.example.com/image.png"> <!-- will NOT be loaded because the default-src 'self' directive is applied -->
<script src="http://www.example.com/library.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive is applied -->
<script src="selfhostedscript.js"></script> <!-- will be loaded because the script-src 'self' http://www.example.com directive is applied -->
<script src="http://www.otherexample.com/library.js"></script> <!-- will NOT be loaded because the script-src 'self' http://www.example.com directive is applied -->

Ask Yourself Whether

  • The resources of the application are fetched from various untrusted locations.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Implement Content Security Policy fetch directives, in particular the default-src directive, and continue to properly sanitize and validate all inputs of the application: CSP fetch directives are only a tool to reduce the impact of cross-site scripting attacks.

Sensitive Code Example

In an Express.js application, the code is sensitive if the helmet contentSecurityPolicy middleware is disabled:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(
    helmet({
      contentSecurityPolicy: false, // Sensitive
    })
);

Compliant Solution

In an Express.js application, a standard way to implement CSP is the helmet contentSecurityPolicy middleware:

const express = require('express');
const helmet = require('helmet');

let app = express();
app.use(helmet.contentSecurityPolicy()); // Compliant
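
If specific origins must be allowed, helmet also accepts explicit directives. A hedged sketch (the allowed origin is illustrative):

app.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'", "http://www.example.com"]
    }
  })
);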

See

javascript:S5542

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

For AES, the weakest mode is ECB (Electronic Codebook). Repeated blocks of data are encrypted to the same value, making them easy to identify and reducing the difficulty of recovering the original cleartext.

Unauthenticated modes such as CBC (Cipher Block Chaining) may be used but are prone to attacks that manipulate the ciphertext. They must be used with caution.

For RSA, the weakest configurations either use it without padding or use the PKCS1v1.5 padding scheme.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate possible impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

Noncompliant code example

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-128-CBC", key, iv); // Noncompliant

Compliant solution

Example with a symmetric cipher, AES:

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

Appropriate choices are currently the following.

For AES: use authenticated encryption modes

The best-known authenticated encryption mode for AES is Galois/Counter mode (GCM).

GCM mode combines encryption with authentication and integrity checks using a cryptographic hash function and provides both confidentiality and authenticity of data.

Other similar modes are:

  • CCM: Counter with CBC-MAC
  • CWC: Carter-Wegman + CTR mode
  • EAX: Encrypt-and-Authenticate
  • IAPM: Integrity Aware Parallelizable Mode
  • OCB: Offset Codebook Mode

It is also possible to use AES-CBC with HMAC for integrity checks. However, it is considered more straightforward to use AES-GCM directly instead.
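
For illustration, a minimal sketch of AES-256-GCM encryption with the Node.js crypto module (the message is illustrative):

const crypto = require('crypto');

const key = crypto.randomBytes(32); // 256-bit key
const iv  = crypto.randomBytes(12); // a 96-bit IV is standard for GCM
const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('secret message', 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag(); // store/transmit iv, ciphertext and tag together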

For RSA: use the OAEP scheme

The Optimal Asymmetric Encryption Padding scheme (OAEP) adds randomness and a secure hash function that strengthens the regular inner workings of RSA.
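
A hedged sketch of RSA encryption that explicitly requests OAEP padding in Node.js (publicKey is an assumed PEM-encoded key):

const crypto = require('crypto');

const encrypted = crypto.publicEncrypt({
  key: publicKey,
  padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
  oaepHash: 'sha256'
}, Buffer.from('secret message'));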

Resources

Articles & blog posts

Standards

javascript:S5547

This vulnerability makes it possible that the cleartext of the encrypted message might be recoverable without prior knowledge of the key.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communication in various domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

What is the potential impact?

The cleartext of an encrypted message might be recoverable. Additionally, it might be possible to modify the cleartext of an encrypted message.

Below are some real-world scenarios that illustrate some impacts of an attacker exploiting the vulnerability.

Theft of sensitive data

The encrypted message might contain data that is considered sensitive and should not be known to third parties.

By using a weak algorithm the likelihood that an attacker might be able to recover the cleartext drastically increases.

Additional attack surface

By modifying the cleartext of the encrypted message it might be possible for an attacker to trigger other vulnerabilities in the code. Encrypted values are often considered trusted, since under normal circumstances it would not be possible for a third party to modify them.

How to fix it in Node.js

Code examples

The following code contains examples of algorithms that are not considered highly resistant to cryptanalysis and thus should be avoided.

Noncompliant code example

const crypto = require('crypto');

crypto.createCipheriv("DES", key, iv); // Noncompliant

Compliant solution

const crypto = require('crypto');

crypto.createCipheriv("AES-256-GCM", key, iv);

How does this work?

Use a secure algorithm

It is highly recommended to use an algorithm that is currently considered secure by the cryptographic community. A common choice for such an algorithm is the Advanced Encryption Standard (AES).

For block ciphers, it is not recommended to use algorithms with a block size that is smaller than 128 bits.

Resources

Standards

javascript:S5659

This vulnerability allows forging of JSON Web Tokens to impersonate other users.

Why is this an issue?

JSON Web Tokens (JWTs), a popular method of securely transmitting information between parties as a JSON object, can become a significant security risk when they are not properly signed with a robust cipher algorithm, left unsigned altogether, or if the signature is not verified. This vulnerability class allows malicious actors to craft fraudulent tokens, effectively impersonating user identities. In essence, the integrity of a JWT hinges on the strength and presence of its signature.

What is the potential impact?

When a JSON Web Token is not appropriately signed with a strong cipher algorithm or if the signature is not verified, it becomes a significant threat to data security and the privacy of user identities.

Impersonation of users

JWTs are commonly used to represent user authorization claims. They contain information about the user’s identity, user roles, and access rights. When these tokens are not securely signed, it allows an attacker to forge them. In essence, a weak or missing signature gives an attacker the power to craft a token that could impersonate any user. For instance, they could create a token for an administrator account, gaining access to high-level permissions and sensitive data.

Unauthorized data access

When a JWT is not securely signed, it can be tampered with by an attacker, and the integrity of the data it carries cannot be trusted. An attacker can manipulate the content of the token and grant themselves permissions they should not have, leading to unauthorized data access.

How to fix it in jsonwebtoken

Code examples

The following code contains examples of JWT encoding and decoding without a strong cipher algorithm.

Noncompliant code example

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'none' }); // Noncompliant

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['none'] // Noncompliant
}, callbackcheck);

Compliant solution

const jwt = require('jsonwebtoken');

jwt.sign(payload, key, { algorithm: 'HS256' });

const jwt = require('jsonwebtoken');

jwt.verify(token, key, {
    expiresIn: 360000,
    algorithms: ['HS256']
}, callbackcheck);

How does this work?

Always sign your tokens

The foremost measure to enhance JWT security is to ensure that every JWT you issue is signed. Unsigned tokens are like open books that anyone can tamper with. Signing your JWTs ensures that any alterations to the tokens after they have been issued can be detected. Most JWT libraries support a signing function, and using it is usually as simple as providing a secret key when the token is created.

Choose a strong cipher algorithm

It is not enough to merely sign your tokens. You need to sign them with a strong cipher algorithm. Algorithms like HS256 (HMAC using SHA-256) are considered secure for most purposes. But for an additional layer of security, you could use an algorithm like RS256 (RSA Signature with SHA-256), which uses a private key for signing and a public key for verification. This way, even if someone gains access to the public key, they will not be able to forge tokens.
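
For instance, a hedged sketch of asymmetric signing with the jsonwebtoken library (payload is an assumed claims object; privateKey and publicKey are assumed PEM-encoded keys):

const jwt = require('jsonwebtoken');

// sign with the private key; verifiers only need the public key
const token = jwt.sign(payload, privateKey, { algorithm: 'RS256' });
const decoded = jwt.verify(token, publicKey, { algorithms: ['RS256'] });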

Verify the signature of your tokens

Resolving a vulnerability concerning the validation of JWT token signatures is mainly about incorporating a critical step into your process: validating the signature every time a token is decoded. Just having a signed token using a secure algorithm is not enough. If you are not validating signatures, they are not serving their purpose.

Every time your application receives a JWT, it needs to decode the token to extract the information contained within. It is during this decoding process that the signature of the JWT should also be checked.

To resolve the issue, follow these instructions:

  1. Use framework-specific functions for signature verification: Most programming frameworks that support JWTs provide specific functions to not only decode a token but also validate its signature simultaneously. Make sure to use these functions when handling incoming tokens.
  2. Handle invalid signatures appropriately: If a JWT’s signature does not validate correctly, it means the token is not trustworthy, indicating potential tampering. The action to take when encountering an invalid token should be denying the request carrying it and logging the event for further investigation.
  3. Incorporate signature validation in your tests: When you are writing tests for your application, include tests that check the signature validation functionality. This can help you catch any instances where signature verification might be unintentionally skipped or bypassed.

By following these practices, you can ensure the security of your application’s JWT handling process, making it resistant to attacks that rely on tampering with tokens. Validation of the signature needs to be an integral and non-negotiable part of your token handling process.

Going the extra mile

Securely store your secret keys

Ensure that your secret keys are stored securely. They should not be hard-coded into your application code or checked into your version control system. Instead, consider using environment variables, secure key management systems, or vault services.

Rotate your secret keys

Even with the strongest cipher algorithms, there is a risk that your secret keys may be compromised. Therefore, it is a good practice to periodically rotate your secret keys. By doing so, you limit the amount of time that an attacker can misuse a stolen key. When you rotate keys, be sure to allow a grace period where tokens signed with the old key are still accepted to prevent service disruptions.

Resources

Standards

javascript:S2245

Using pseudorandom number generators (PRNGs) is security-sensitive. For example, it has led to vulnerabilities in the past.

When software generates predictable values in a context requiring unpredictability, it may be possible for an attacker to guess the next value that will be generated, and use this guess to impersonate another user or access sensitive information.

As the Math.random() function relies on a weak pseudorandom number generator, this function should not be used for security-critical applications or for protecting sensitive data. In such context, a cryptographically strong pseudorandom number generator (CSPRNG) should be used instead.

Ask Yourself Whether

  • the code using the generated value requires it to be unpredictable. It is the case for all encryption mechanisms or when a secret value, such as a password, is hashed.
  • the function you use generates a value which can be predicted (pseudo-random).
  • the generated value is used multiple times.
  • an attacker can access the generated value.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Use a cryptographically strong pseudorandom number generator (CSPRNG) like crypto.getRandomValues().
  • Use the generated random values only once.
  • You should not expose the generated random value. If you have to store it, make sure that the database or file is secure.

Sensitive Code Example

const val = Math.random(); // Sensitive
// Check if val is used in a security context.

Compliant Solution

// === Client side ===
const crypto = window.crypto || window.msCrypto;
var array = new Uint32Array(1);
crypto.getRandomValues(array); // Compliant for security-sensitive use cases

// === Server side ===
const crypto = require('crypto');
const buf = crypto.randomBytes(1); // Compliant for security-sensitive use cases

See

javascript:S4423

This vulnerability exposes encrypted data to a number of attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

For these reasons, as soon as cryptography is included in a project, it is important to choose encryption algorithms that are considered strong and secure by the cryptography community.

To provide communication security over a network, SSL and TLS are generally used. However, it is important to note that the following protocols are all considered weak by the cryptographic community, and are officially deprecated:

  • SSL versions 1.0, 2.0 and 3.0
  • TLS versions 1.0 and 1.1

When these unsecured protocols are used, it is best practice to expect a breach: that a user or organization with malicious intent will perform mathematical attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

Noncompliant code example

Node.js offers multiple ways to set weak TLS protocol versions. For the https and tls modules, the following options are used; they are also reused by other third-party libraries.

The first is secureProtocol:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
 secureProtocol: 'TLSv1_method' // Noncompliant
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The second is the combination of minVersion and maxVersion. Note that they cannot be specified along with the secureProtocol option:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.1',  // Noncompliant
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The third is secureOptions, which in this example instructs OpenSSL to turn off some protocol versions altogether. In general, this option might trigger side effects and should be used carefully, if used at all.

const https     = require('node:https');
const tls       = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
}; // Noncompliant

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Compliant solution

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

The same result can be achieved with the minVersion and maxVersion options:

const https = require('node:https');
const tls   = require('node:tls');

let options = {
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.2'
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

Here, the goal is to turn on only TLSv1.2 and higher, by turning off all lower versions:

const https = require('node:https');
const tls   = require('node:tls');
const { constants } = require('node:crypto');

let options = {
  secureOptions:
    constants.SSL_OP_NO_SSLv2
    | constants.SSL_OP_NO_SSLv3
    | constants.SSL_OP_NO_TLSv1
    | constants.SSL_OP_NO_TLSv1_1
};

let req    = https.request(options, (res) => { });
let socket = tls.connect(443, "www.example.com", options, () => { });

How does this work?

As a rule of thumb, by default you should use the cryptographic algorithms and mechanisms that are considered strong by the cryptographic community.

The best choices at the moment are the following.

Use TLS v1.2 or TLS v1.3

Even though TLS V1.3 is available, using TLS v1.2 is still considered good and secure practice by the cryptography community.

The use of TLS v1.2 ensures compatibility with a wide range of platforms and enables seamless communication between different systems that do not yet have TLS v1.3 support.

The only drawback depends on whether the framework used is outdated: its TLS v1.2 settings may enable older cipher suites that are deprecated as insecure.

On the other hand, TLS v1.3 removes support for older and weaker cryptographic algorithms, eliminates known vulnerabilities from previous TLS versions, and improves performance.

Resources

Articles & blog posts

Standards

javascript:S4426

This vulnerability exposes encrypted data to attacks whose goal is to recover the plaintext.

Why is this an issue?

Encryption algorithms are essential for protecting sensitive information and ensuring secure communications in a variety of domains. They are used for several important reasons:

  • Confidentiality, privacy, and intellectual property protection
  • Security during transmission or on storage devices
  • Data integrity, general trust, and authentication

When selecting encryption algorithms, tools, or combinations, you should also consider two things:

  1. No encryption is unbreakable.
  2. The strength of an encryption algorithm is usually measured by the effort required to crack it within a reasonable time frame.

In today’s cryptography, the length of the key directly affects the security level of cryptographic algorithms.

Note that depending on the algorithm, the term key refers to a different mathematical property. For example:

  • For RSA, the key is the product of two large prime numbers, also called the modulus.
  • For AES and Elliptic Curve Cryptography (ECC), the key is only a sequence of randomly generated bytes.
    • In some cases, AES keys are derived from a master key or a passphrase using a Key Derivation Function (KDF) like PBKDF2 (Password-Based Key Derivation Function 2)

If an application uses a key that is considered short and insecure, the encrypted data is exposed to attacks aimed at getting at the plaintext.

In general, it is best practice to expect a breach: that a user or organization with malicious intent will perform cryptographic attacks on this data after obtaining it by other means.

What is the potential impact?

After retrieving encrypted data and performing cryptographic attacks on it over a given timeframe, attackers can recover the plaintext that encryption was supposed to protect.

Depending on the recovered data, the impact may vary.

Below are some real-world scenarios that illustrate the potential impact of an attacker exploiting the vulnerability.

Additional attack surface

By modifying the plaintext of the encrypted message, an attacker may be able to trigger additional vulnerabilities in the code. An attacker can further exploit a system to obtain more information.
Encrypted values are often considered trustworthy because it would not be possible for a third party to modify them under normal circumstances.

Breach of confidentiality and privacy

When encrypted data contains personal or sensitive information, its retrieval by an attacker can lead to privacy violations, identity theft, financial loss, reputational damage, or unauthorized access to confidential systems.

In this scenario, the company, its employees, users, and partners could be seriously affected.

The impact is twofold, as data breaches and exposure of encrypted data can undermine trust in the organization, as customers, clients and stakeholders may lose confidence in the organization’s ability to protect their sensitive data.

Legal and compliance issues

In many industries and locations, there are legal and compliance requirements to protect sensitive data. If encrypted data is compromised and the plaintext can be recovered, companies face legal consequences, penalties, or violations of privacy laws.

How to fix it in Node.js

Code examples

The following code examples either explicitly or implicitly generate keys. Note that there are differences in the size of the keys depending on the algorithm.

Due to the mathematical properties of the algorithms, the security requirements for the key size vary depending on the algorithm.
For example, a 256-bit ECC key provides about the same level of security as a 3072-bit RSA key and a 128-bit symmetric key.

Noncompliant code example

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 1024,  // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

const crypto = require('crypto');

function callback(err, publicKey, privateKey) {}

crypto.generateKeyPair('ec', {
    namedCurve: 'secp112r2', // Noncompliant
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

Compliant solution

Here is an example of a private key generation with RSA:

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of a key generation with the Digital Signature Algorithm (DSA):

const crypto = require('crypto');

var { privateKey, publicKey } = crypto.generateKeyPairSync('dsa', {
    modulusLength: 2048,
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });

Here is an example of an Elliptic Curve (EC) initialization. It implicitly generates a private key whose size is indicated in the elliptic curve name:

const crypto = require('crypto');

function callback(err, publicKey, privateKey) {}

crypto.generateKeyPair('ec', {
    namedCurve: 'secp224k1',
    publicKeyEncoding:  { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  },
  callback);

How does this work?

As a rule of thumb, use the cryptographic algorithms and mechanisms that are considered strong by the cryptography community.

The appropriate choices are the following.

RSA (Rivest-Shamir-Adleman) and DSA (Digital Signature Algorithm)

The security of these algorithms depends on the difficulty of attacks attempting to solve their underlying mathematical problem.

In general, a minimum key size of 2048 bits is recommended for both. It provides 112 bits of security. A key length of 3072 or 4096 should be preferred when possible.

AES (Advanced Encryption Standard)

AES supports three key sizes: 128 bits, 192 bits and 256 bits. The security of the AES algorithm is based on the computational complexity of trying all possible keys.
A larger key size increases the number of possible keys and makes exhaustive search attacks computationally infeasible. Therefore, a 256-bit key provides a higher level of security than a 128-bit or 192-bit key.

Currently, a minimum key size of 128 bits is recommended for AES.
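
For example, a minimal sketch of generating a 256-bit AES key with Node.js:

const crypto = require('crypto');

const aesKey = crypto.randomBytes(32); // 32 bytes = 256 bits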

Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography is also used in various algorithms, such as ECDSA, ECDH, or ECMQV. The length of keys generated with elliptic curve algorithms is mentioned directly in their names. For example, secp256k1 generates a 256-bit-long private key.

Currently, a minimum key size of 224 bits is recommended for EC-based algorithms.

Additionally, some curves that theoretically provide sufficiently long keys are still discouraged. This can be because of a flaw in the curve parameters, a bad overall design, or poor performance. It is generally advised to use a NIST-approved elliptic curve wherever possible. Such curves currently include:

  • NIST P curves with a size of at least 224 bits, e.g. secp256r1.
  • Curve25519, generally known as ed25519 or x25519 depending on its application.
  • Curve448.
  • Brainpool curves with a size of at least 224 bits, e.g. brainpoolP224r1

Going the extra mile

Pre-Quantum Cryptography

Encrypted data and communications recorded today could be decrypted in the future by an attack from a quantum computer.
It is important to keep in mind that NIST-approved digital signature schemes, key agreement, and key transport may need to be replaced with secure quantum-resistant (or "post-quantum") counterparts.

Thus, if data is to remain secure beyond 2030, proactive measures should be taken now to ensure its safety.


Resources

Articles & blog posts

Standards

javascript:S4787

This rule is deprecated; use S4426, S5542, S5547 instead.

Encrypting data is security-sensitive. It has led to vulnerabilities in the past.

Proper encryption requires both the encryption algorithm and the key to be strong. Obviously the private key needs to remain secret and be renewed regularly. However these are not the only means to defeat or weaken an encryption.

This rule flags function calls that initiate encryption/decryption.

Ask Yourself Whether

  • the private key might not be random or strong enough, or the same key might be reused for a long time.
  • the private key might be compromised. It can happen when it is stored in an unsafe place or when it was transferred in an unsafe manner.
  • the key exchange is made without properly authenticating the receiver.
  • the encryption algorithm is not strong enough for the level of protection required. Note that encryption algorithms strength decreases as time passes.
  • the chosen encryption library is deemed unsafe.
  • a nonce is used, and the same value is reused multiple times, or the nonce is not random.
  • the RSA algorithm is used, and it does not incorporate an Optimal Asymmetric Encryption Padding (OAEP), which might weaken the encryption.
  • the CBC (Cipher Block Chaining) algorithm is used for encryption, and its IV (Initialization Vector) is not generated using a secure random algorithm, or it is reused.
  • the Advanced Encryption Standard (AES) encryption algorithm is used with an insecure mode. See the recommended practices for more information.

You are at risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Generate encryption keys using secure random algorithms.
  • When generating cryptographic keys (or key pairs), it is important to use a key length that provides enough entropy against brute-force attacks. For the Blowfish algorithm the key should be at least 128 bits long, while for the RSA algorithm it should be at least 2048 bits long.
  • Regenerate the keys regularly.
  • Always store the keys in a safe location and transfer them only over safe channels.
  • If there is an exchange of cryptographic keys, check first the identity of the receiver.
  • Only use strong encryption algorithms. Check regularly that the algorithm is still deemed secure. It is also imperative that they are implemented correctly. Use only encryption libraries which are deemed secure. Do not define your own encryption algorithms as they will most probably have flaws.
  • When a nonce is used, generate it randomly every time.
  • When using the RSA algorithm, incorporate an Optimal Asymmetric Encryption Padding (OAEP).
  • When CBC is used for encryption, the IV must be random and unpredictable. Otherwise it exposes the encrypted value to crypto-analysis attacks like "Chosen-Plaintext Attacks". Thus a secure random algorithm should be used. An IV value should be associated to one and only one encryption cycle, because the IV’s purpose is to ensure that the same plaintext encrypted twice will yield two different ciphertexts.
  • The Advanced Encryption Standard (AES) encryption algorithm can be used with various modes. Galois/Counter Mode (GCM) with no padding should be preferred to the following combinations, which are not secure (a GCM sketch follows this list):
    • Electronic Codebook (ECB) mode: Under a given key, any given plaintext block always gets encrypted to the same ciphertext block. Thus, it does not hide data patterns well. In some senses, it doesn’t provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.
    • Cipher Block Chaining (CBC) with PKCS#5 padding (or PKCS#7) is susceptible to padding oracle attacks.
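
A minimal Node.js sketch of AES-256-GCM with a fresh random IV per message (key management is simplified for illustration):

const crypto = require('crypto');

const key = crypto.randomBytes(32); // 256-bit key from a CSPRNG
const iv = crypto.randomBytes(12);  // fresh 96-bit IV for every single encryption

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const encrypted = Buffer.concat([cipher.update('plain text', 'utf8'), cipher.final()]);
const authTag = cipher.getAuthTag(); // integrity tag, stored alongside iv + ciphertext

const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(authTag);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]).toString('utf8');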

Sensitive Code Example

// === Client side ===
crypto.subtle.encrypt(algo, key, plainData); // Sensitive
crypto.subtle.decrypt(algo, key, encData); // Sensitive
// === Server side ===
const crypto = require("crypto");
const cipher = crypto.createCipher(algo, key); // Sensitive
const cipheriv = crypto.createCipheriv(algo, key, iv); // Sensitive
const decipher = crypto.createDecipher(algo, key); // Sensitive
const decipheriv = crypto.createDecipheriv(algo, key, iv); // Sensitive
const pubEnc = crypto.publicEncrypt(key, buf); // Sensitive
const privDec = crypto.privateDecrypt({ key: key, passphrase: secret }, pubEnc); // Sensitive
const privEnc = crypto.privateEncrypt({ key: key, passphrase: secret }, buf); // Sensitive
const pubDec = crypto.publicDecrypt(key, privEnc); // Sensitive

See

javascript:S5876

An attacker may trick a user into using a predetermined session identifier. Consequently, this attacker can gain unauthorized access and impersonate the user’s session. This kind of attack is called session fixation, and protections against it should not be disabled.

Why is this an issue?

Session fixation attacks take advantage of the way web applications manage session identifiers. Here’s how a session fixation attack typically works:

  • When a user visits a website or logs in, a session is created for them.
  • This session is assigned a unique session identifier, stored in a cookie, in local storage, or through URL parameters.
  • In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. For example, the attacker sends the victim an email containing a link with this predetermined session identifier.
  • When the victim clicks on the link, the web application does not create a new session identifier but uses this identifier known to the attacker.
  • At this point, the attacker can hijack and impersonate the victim’s session.

What is the potential impact?

Session fixation attacks pose a significant security risk to web applications and their users. By exploiting this vulnerability, attackers can gain unauthorized access to user sessions, potentially leading to various malicious activities. Some of the most relevant scenarios are the following:

Impersonation

Once an attacker successfully fixes a session identifier, they can impersonate the victim and gain access to their account without providing valid credentials. This can result in unauthorized actions, such as modifying personal information, making unauthorized transactions, or even performing malicious activities on behalf of the victim. An attacker can also manipulate the victim into performing actions they wouldn’t normally do, such as revealing sensitive information or conducting financial transactions on the attacker’s behalf.

Data Breach

If an attacker gains access to a user’s session, they may also gain access to sensitive data associated with that session. This can include personal information, financial details, or any other confidential data that the user has access to within the application. The compromised data can be used for identity theft, financial fraud, or other malicious purposes.

Privilege Escalation

In some cases, session fixation attacks can be used to escalate privileges within a web application. By fixing a session identifier with higher privileges, an attacker can bypass access controls and gain administrative or privileged access to the application. This can lead to unauthorized modifications, data manipulation, or even complete compromise of the application and its underlying systems.

How to fix it in Passport

Code examples

Upon user authentication, it is crucial to regenerate the session identifier to prevent fixation attacks. Passport provides a mechanism to achieve this by using the req.session.regenerate() method. By calling this method after successful authentication, you can ensure that each user is assigned a new and unique session ID.

Noncompliant code example

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    // Noncompliant - no session.regenerate after login
    res.redirect('/');
  });

Compliant solution

app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  function(req, res) {
    let prevSession = req.session;
    req.session.regenerate((err) => {
      // Carry the data from the old session over to the newly created one
      Object.assign(req.session, prevSession);
      res.redirect('/');
    });
  });

How does this work?

The protection works by ensuring that the session identifier, which is used to identify and track a user’s session, is changed or regenerated during the authentication process.

Here’s how session fixation protection typically works:

  1. When a user visits a website or logs in, a session is created for them. This session is assigned a unique session identifier, which is stored in a cookie or passed through URL parameters.
  2. In a session fixation attack, an attacker tricks a user into using a predetermined session identifier controlled by the attacker. This allows the attacker to potentially gain unauthorized access to the user’s session.
  3. To protect against session fixation attacks, session fixation protection mechanisms come into play during the authentication process. When a user successfully authenticates, this mechanism generates a new session identifier for the user’s session.
  4. The old session identifier, which may have been manipulated by the attacker, is invalidated and no longer associated with the user’s session. This ensures that any attempts by the attacker to use the fixed session identifier are rendered ineffective.
  5. The user is then assigned the new session identifier, which is used for subsequent requests and session tracking. This new session identifier is typically stored in a new session cookie or passed through URL parameters.

By regenerating the session identifier upon authentication, session fixation protection helps ensure that the user’s session is tied to a new, secure identifier that the attacker cannot predict or control. This mitigates the risk of an attacker gaining unauthorized access to the user’s session and helps maintain the integrity and security of the application’s session management process.

Resources

Documentation

Articles & blog posts

Standards

javascript:S3330

When a cookie is configured with the HttpOnly attribute set to true, the browser guarantees that no client-side script will be able to read it. In most cases, when a cookie is created, the default value of HttpOnly is false and it’s up to the developer to decide whether or not the content of the cookie can be read by client-side scripts. As a majority of Cross-Site Scripting (XSS) attacks target the theft of session cookies, the HttpOnly attribute can help to reduce their impact, as it won’t be possible to exploit the XSS vulnerability to steal session cookies.

Ask Yourself Whether

  • the cookie is sensitive, used to authenticate the user, for instance a session-cookie
  • the HttpOnly attribute offers additional protection (not the case for an XSRF-TOKEN cookie / CSRF token for example)

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • By default, the HttpOnly flag should be set to true for most cookies, and it is mandatory for session / sensitive-security cookies.

Sensitive Code Example

cookie-session module:

let session = cookieSession({
  httpOnly: false, // Sensitive
}); // Sensitive

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: false // Sensitive
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: false // Sensitive
}); // Sensitive

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: false }}); // Sensitive

Compliant Solution

cookie-session module:

let session = cookieSession({
  httpOnly: true, // Compliant
}); // Compliant

express-session module:

const express = require('express');
const session = require('express-session');

let app = express();
app.use(session({
  cookie:
  {
    httpOnly: true // Compliant
  }
}));

cookies module:

let cookies = new Cookies(req, res, { keys: keys });

cookies.set('LastVisit', new Date().toISOString(), {
  httpOnly: true // Compliant
}); // Compliant

csurf module:

const cookieParser = require('cookie-parser');
const csrf = require('csurf');
const express = require('express');

let csrfProtection = csrf({ cookie: { httpOnly: true }}); // Compliant

See

javascript:S4784

This rule is deprecated; use S5852 instead.

Using regular expressions is security-sensitive. It has led to vulnerabilities in the past.

Evaluating regular expressions against input strings is potentially an extremely CPU-intensive task. Specially crafted regular expressions such as (a+)+s will take several seconds to evaluate the input string aaaaaaaaaaaaaaaaaaaaaaaaaaaaabs. The problem is that with every additional a character added to the input, the time required to evaluate the regex doubles. However, the equivalent regular expression, a+s (without grouping) is efficiently evaluated in milliseconds and scales linearly with the input size.
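
This behavior can be observed directly with the two expressions above; the sketch below times both against the same failing input (the input size is kept small enough for the vulnerable pattern to finish in seconds):

const input = 'a'.repeat(28) + 'bs';

let start = Date.now();
/(a+)+s/.test(input); // catastrophic backtracking: exponential in the number of 'a's
console.log(`(a+)+s took ${Date.now() - start} ms`);

start = Date.now();
/a+s/.test(input); // no nested quantifier: evaluated in a few milliseconds
console.log(`a+s took ${Date.now() - start} ms`);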

Evaluating such regular expressions opens the door to Regular expression Denial of Service (ReDoS) attacks. In the context of a web application, attackers can force the web server to spend all of its resources evaluating regular expressions thereby making the service inaccessible to genuine users.

This rule flags any execution of a hardcoded regular expression which has at least 3 characters and at least two instances of any of the following characters: *+{.

Example: (a+)*

Ask Yourself Whether

  • the executed regular expression is sensitive and a user can provide a string which will be analyzed by this regular expression.
  • your regular expression engine performance decrease with specially crafted inputs and regular expressions.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Check whether your regular expression engine (the algorithm executing your regular expression) has any known vulnerabilities. Search for vulnerability reports mentioning the engine you are using.

If possible, use a library that is not vulnerable to ReDoS attacks, such as Google's RE2.

Remember also that a ReDoS attack is possible if a user-provided regular expression is executed. This rule won’t detect this kind of injection.

Sensitive Code Example

const regex = /(a+)+b/; // Sensitive
const regex2 = new RegExp("(a+)+b"); // Sensitive

str.search("(a+)+b"); // Sensitive
str.match("(a+)+b"); // Sensitive
str.split("(a+)+b"); // Sensitive

Note: String.matchAll does not raise any issue as it is not supported by NodeJS.

Exceptions

Some corner-case regular expressions will not raise an issue even though they might be vulnerable. For example: (a|aa)+, (a|a?)+.

It is a good idea to test your regular expression if it has the same pattern on both sides of a "|".

See

javascript:S5757

Log management is an important topic, especially for the security of a web application: user activity, including that of potential attackers, should be recorded and available to an analyst who needs to understand what happened on the web application in case of malicious activity.

Retention of specific logs for a defined period of time is often necessary to comply with regulations such as GDPR, PCI DSS and others. However, to protect users' privacy, certain information is forbidden or strongly discouraged from being logged, such as user passwords or credit card numbers, which obviously should not be stored, or at least not in clear text.

Ask Yourself Whether

In a production environment:

  • The web application uses confidential information and logs a significant amount of data.
  • Logs are externalized to SIEM or Big Data repositories.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Loggers should be configured with a list of confidential, personal information that will be hidden/masked or removed from logs.

Sensitive Code Example

With Signale log management framework the code is sensitive when an empty list of secrets is defined:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// here we suppose the credit card numbers are retrieved somewhere, e.g. CREDIT_CARD_NUMBERS = ["1234-5678-0000-9999", "1234-5678-0000-8888"]

const options = {
  secrets: []         // empty list of secrets
};

const logger = new Signale(options); // Sensitive

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

Compliant Solution

With Signale log management framework it is possible to define a list of secrets that will be hidden in logs:

const { Signale } = require('signale');

const CREDIT_CARD_NUMBERS = fetchFromWebForm();
// here we suppose the credit card numbers are retrieved somewhere, e.g. CREDIT_CARD_NUMBERS = ["1234-5678-0000-9999", "1234-5678-0000-8888"]

const options = {
  secrets: ["([0-9]{4}-?)+"]
};

const logger = new Signale(options); // Compliant

CREDIT_CARD_NUMBERS.forEach(function(CREDIT_CARD_NUMBER) {
  logger.log('The customer ordered products with the credit card number = %s', CREDIT_CARD_NUMBER);
});

See

javascript:S2255

This rule is deprecated, and will eventually be removed.

Using cookies is security-sensitive. It has led to vulnerabilities in the past.

Attackers can use widely-available tools to read cookies. Any sensitive information they may contain will be exposed.

This rule flags code that writes cookies.

Ask Yourself Whether

  • sensitive information is stored inside the cookie.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Cookies should only be used to manage the user session. The best practice is to keep all user-related information server-side and link it to the user session, never sending it to the client. In a few corner cases, cookies can be used for non-sensitive information that needs to live longer than the user session.

Do not try to encode sensitive information in a non-human-readable format before writing it to a cookie. The encoding can be reversed, and the original information will be exposed.

Using cookies only for session IDs doesn’t make them secure. Follow OWASP best practices when you configure your cookies.

As a side note, any information read from a cookie should be sanitized.

Sensitive Code Example

// === Built-in NodeJS modules ===
const http = require('http');
const https = require('https');

http.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
https.createServer(function(req, res) {
  res.setHeader('Set-Cookie', ['type=ninja', 'lang=js']); // Sensitive
});
// === ExpressJS ===
const express = require('express');
const app = express();
app.use(function(req, res, next) {
  res.cookie('name', 'John'); // Sensitive
});
// === In browser ===
// Set cookie
document.cookie = "name=John"; // Sensitive

See

javascript:S5759

Users often connect to web servers through HTTP proxies.

Proxies can be configured to forward the client IP address via the X-Forwarded-For or Forwarded HTTP headers.

An IP address is personal information that can identify a single user and thus impacts their privacy.

Ask Yourself Whether

  • The web application uses reverse proxies or similar but doesn’t need to know the IP address of the user.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

The user's IP address should not be forwarded unless the application needs it, for example as part of an authentication or authorization scheme, or for log management.

Sensitive Code Example

node-http-proxy

var httpProxy = require('http-proxy');

httpProxy.createProxyServer({target:'http://localhost:9000', xfwd:true}) // Noncompliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true, xfwd: true })); // Noncompliant
app.listen(3000);

Compliant Solution

node-http-proxy

var httpProxy = require('http-proxy');

// By default xfwd option is false
httpProxy.createProxyServer({target:'http://localhost:9000'}) // Compliant
  .listen(8000);

http-proxy-middleware

var express = require('express');

const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// By default xfwd option is false
app.use('/proxy', createProxyMiddleware({ target: 'http://localhost:9000', changeOrigin: true})); // Compliant
app.listen(3000);

See

javascript:S4790

Cryptographic hash algorithms such as MD2, MD4, MD5, MD6, HAVAL-128, HMAC-MD5, DSA (which uses SHA-1), RIPEMD, RIPEMD-128, RIPEMD-160, HMACRIPEMD160 and SHA-1 are no longer considered secure, because it is possible to have collisions (little computational effort is enough to find two or more different inputs that produce the same hash).

Ask Yourself Whether

The hashed value is used in a security context like:

  • User-password storage.
  • Security token generation (used to confirm e-mail when registering on a website, reset password, etc …​).
  • To compute some message integrity.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Safer alternatives, such as SHA-256, SHA-512 and SHA-3, are recommended. For password hashing, it is even better to use algorithms that do not compute too "quickly", like bcrypt, scrypt, argon2 or pbkdf2, because slowness hampers brute-force attacks.
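
For instance, Node.js ships scrypt in its built-in crypto module; a minimal password-hashing sketch (the cost parameters are defaults and should be tuned for your environment):

const crypto = require('crypto');

const password = 'user-supplied password'; // placeholder for illustration
const salt = crypto.randomBytes(16);       // unique random salt per password
const hash = crypto.scryptSync(password, salt, 64); // memory-hard KDF, slows brute force

// Verification: re-derive with the stored salt and compare in constant time
const candidate = crypto.scryptSync(password, salt, 64);
console.log(crypto.timingSafeEqual(hash, candidate)); // true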

Sensitive Code Example

const crypto = require("crypto");

const hash = crypto.createHash('sha1'); // Sensitive

Compliant Solution

const crypto = require("crypto");

const hash = crypto.createHash('sha512'); // Compliant

See

javascript:S5527

This vulnerability allows attackers to impersonate a trusted host.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. In this process, the role of hostname validation, combined with certificate validation, is to ensure that a system is indeed the one it claims to be, adding an extra layer of trust and security.

When hostname validation is disabled, the client skips this critical check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

To do so, an attacker would obtain a valid certificate authenticating example.com, serve it using a different hostname, and the application code would still accept it.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable hostname validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate hostnames, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading it as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

How to fix it in Node.js

Code examples

The following code contains examples of disabled hostname validation.

The hostname validation gets disabled by overriding checkServerIdentity with an empty implementation. It is highly recommended to use the original implementation.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  checkServerIdentity: function() {}, // Noncompliant
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
  secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

To fix the vulnerability of disabled hostname validation, it is strongly recommended to first re-enable the default validation and fix the root cause: the validity of the certificate.

Use valid certificates

If a hostname validation failure prevents connecting to the target server, keep in mind that one system’s code should not work around another system’s problems, as this creates unnecessary dependencies and can lead to reliability issues.

Therefore, the first solution is to change the remote host’s certificate to match its identity. If the remote host is not under your control, consider replicating its service to a server whose certificate you can change yourself.

In case the contacted host is located on a development machine, and if there is no other choice, try the following solution (a Node.js sketch follows the list):

  • Create a self-signed certificate for that machine.
  • Add this self-signed certificate to the system’s trust store.
  • If the hostname is not localhost, add the hostname in the /etc/hosts file.
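
In Node.js specifically, a self-signed development certificate can also be trusted explicitly through the ca option, so that certificate and hostname validation both stay enabled; a sketch, where the certificate path and hostname are placeholders:

const tls = require('node:tls');
const fs = require('node:fs');

let options = {
  // Trust this specific self-signed certificate; the default
  // checkServerIdentity and certificate validation remain active
  ca: fs.readFileSync('/path/to/self-signed-cert.pem')
};

let socket = tls.connect(443, "dev.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});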

Resources

Standards

javascript:S2755

This vulnerability allows the usage of external entities in XML.

Why is this an issue?

External Entity Processing allows for XML parsing with the involvement of external entities. However, when this functionality is enabled without proper precautions, it can lead to a vulnerability known as an XML External Entity (XXE) attack.

What is the potential impact?

Exposing sensitive data

One significant danger of XXE vulnerabilities is the potential for sensitive data exposure. By crafting malicious XML payloads, attackers can reference external entities that contain sensitive information, such as system files, database credentials, or configuration files. When these entities are processed during XML parsing, the attacker can extract the contents and gain unauthorized access to sensitive data. This poses a severe threat to the confidentiality of critical information.

Exhausting system resources

Another consequence of XXE vulnerabilities is the potential for denial-of-service attacks. By exploiting the ability to include external entities, attackers can construct XML payloads that cause resource exhaustion. This can overwhelm the system’s memory, CPU, or other critical resources, leading to system unresponsiveness or crashes. A successful DoS attack can disrupt the availability of services and negatively impact the user experience.

Forging requests

XXE vulnerabilities can also enable Server-Side Request Forgery (SSRF) attacks. By leveraging the ability to include external entities, an attacker can make the vulnerable application send arbitrary requests to other internal or external systems. This can result in unintended actions, such as retrieving data from internal resources, scanning internal networks, or attacking other systems. SSRF attacks can lead to severe consequences, including unauthorized data access, system compromise, or even further exploitation within the network infrastructure.

How to fix it in libxmljs

Code examples

The following code contains examples of XML parsers that have external entity processing enabled. As a result, the parsers are vulnerable to XXE attacks if an attacker can control the XML file that is processed.

Noncompliant code example

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml, {
    noblanks: true,
    noent: true, // Noncompliant
    nocdata: true
});

Compliant solution

parseXmlString is safe by default.

var libxmljs = require('libxmljs');
var fs = require('fs');

var xml = fs.readFileSync('xxe.xml', 'utf8');
libxmljs.parseXmlString(xml);

How does this work?

Disable external entities

The most effective approach to prevent XXE vulnerabilities is to disable external entity processing entirely, unless it is explicitly required for specific use cases. By default, XML parsers should be configured to reject the processing of external entities. This can be achieved by setting the appropriate properties or options in your XML parser library or framework.

If external entity processing is necessary for certain scenarios, adopt a whitelisting approach to restrict the entities that can be resolved during XML parsing. Create a list of trusted external entities and disallow all others. This approach ensures that only known and safe entities are processed.
You should rely on features provided by your XML parser to restrict the external entities.

Resources

Standards

javascript:S4817

This rule is deprecated, and will eventually be removed.

Executing XPATH expressions is security-sensitive. It has led to vulnerabilities in the past.

User-provided data such as URL parameters should always be considered untrusted and tainted. Constructing XPath expressions directly from tainted data enables attackers to inject specially crafted values that change the initial meaning of the expression itself. Successful XPath injection attacks can read sensitive information from the XML document.

Ask Yourself Whether

  • the XPATH expression might contain some unsafe input coming from a user.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize any user input before using it in an XPATH expression.
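
One defensive approach, sketched below, is to embed user data only as a strictly validated value inside a fixed expression; the allowlist pattern and the expression are illustrative and should match your actual data model:

var xpath = require('xpath');
var xmldom = require('xmldom');

// Illustrative allowlist: accept only simple alphanumeric identifiers
function assertSafeXPathValue(value) {
  if (!/^[a-zA-Z0-9_-]+$/.test(value)) {
    throw new Error('Rejected potentially malicious XPath input');
  }
  return value;
}

// xml and userinput are defined elsewhere, as in the sensitive example below
var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select("//user[@id='" + assertSafeXPathValue(userinput) + "']", doc);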

Sensitive Code Example

// === Server side ===

var xpath = require('xpath');
var xmldom = require('xmldom');

var doc = new xmldom.DOMParser().parseFromString(xml);
var nodes = xpath.select(userinput, doc); // Sensitive
var node = xpath.select1(userinput, doc); // Sensitive
// === Client side ===

// Chrome, Firefox, Edge, Opera, and Safari use the evaluate() method to select nodes:
var nodes = document.evaluate(userinput, xmlDoc, null, XPathResult.ANY_TYPE, null); // Sensitive

// Internet Explorer uses its own methods to select nodes:
var nodes = xmlDoc.selectNodes(userinput); // Sensitive
var node = xmlDoc.SelectSingleNode(userinput); // Sensitive

See

javascript:S4818

This rule is deprecated, and will eventually be removed.

Using sockets is security-sensitive. It has led to vulnerabilities in the past.

Sockets are vulnerable in multiple ways:

  • They enable software to interact with the outside world. As this world is full of attackers, it is necessary to check that they cannot receive sensitive information or inject dangerous input.
  • The number of sockets is limited and can be exhausted, which makes the application unresponsive to users who need additional sockets.

This rule flags code that creates sockets. It matches only the direct use of sockets, not use through frameworks or high-level APIs such as the use of http connections.

Ask Yourself Whether

  • sockets are created without any limit every time a user performs an action.
  • input received from sockets is used without being sanitized.
  • sensitive data is sent via sockets without being encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • In many cases, there is no need to open a socket yourself; use libraries and existing protocols instead (see the sketch after this list).
  • Encrypt all data sent if it is sensitive. Usually it is better to encrypt it even if the data is not sensitive as it might change later.
  • Sanitize any input read from the socket.
  • Limit the number of sockets a given user can create. Close the sockets as soon as possible.
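
As a sketch of the first point, a plain HTTPS request through the standard https module avoids managing raw sockets and encrypts the traffic by default:

const https = require('https');

// Higher-level API: TLS encryption and socket lifecycle are handled for you
https.get('https://example.com/', (res) => {
  res.on('data', (d) => process.stdout.write(d));
});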

Sensitive Code Example

const net = require('net');

var socket = new net.Socket(); // Sensitive
socket.connect(80, 'google.com');

// net.createConnection creates a new net.Socket, initiates connection with socket.connect(), then returns the net.Socket that starts the connection
net.createConnection({ port: port }, () => {}); // Sensitive

// net.connect is an alias to net.createConnection
net.connect({ port: port }, () => {}); // Sensitive

See

javascript:S1523

Executing code dynamically is security-sensitive. It has led to vulnerabilities in the past.

Some APIs enable the execution of dynamic code by providing it as strings at runtime. These APIs might be useful in some very specific meta-programming use-cases. However, most of the time their use is frowned upon as they also increase the risk of injected code. Such attacks can either run on the server or in the client (example: XSS attacks) and have a huge impact on an application’s security.

This rule raises issues on calls to eval and the Function constructor. This rule does not detect code injections. It only highlights the use of APIs which should be used sparingly and very carefully. The goal is to guide security code reviews.

The rule also flags string literals starting with javascript: as the code passed in javascript: URLs is evaluated the same way as calls to eval or Function constructor.

Ask Yourself Whether

  • the executed code may come from an untrusted source and hasn’t been sanitized.
  • you really need to run code dynamically.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Regarding the execution of unknown code, the best solution is to not run code provided by an untrusted source. If you really need to do it, run the code in a sandboxed environment. Use jails, firewalls and whatever means your operating system and programming language provide (example: Security Managers in Java, iframes and same-origin policy for JavaScript in a web browser).

Do not try to create a blacklist of dangerous code. It is impossible to cover all attacks that way.

Avoid using dynamic code APIs whenever possible. Hard-coded code is always safer.
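
As a sketch, the dynamic property access shown in the example below can usually be replaced by a plain, allowlisted property lookup with no dynamic code at all (the property names are illustrative):

const ALLOWED_PROPS = new Set(['name', 'email']); // illustrative allowlist

function getProp(obj, propName) {
  if (!ALLOWED_PROPS.has(propName)) {
    throw new Error('Unexpected property: ' + propName);
  }
  return obj[propName]; // no eval, no Function constructor
}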

Sensitive Code Example

let value = eval('obj.' + propName); // Sensitive
let func = Function('obj' + propName); // Sensitive
location.href = 'javascript:void(0)'; // Sensitive

Exceptions

This rule will not raise an issue when the argument of eval or Function is a literal string, as it is reasonably safe.

See

javascript:S1525

This rule is deprecated; use S4507 instead.

Why is this an issue?

The debugger statement can be placed anywhere in procedures to suspend execution. Using the debugger statement is similar to setting a breakpoint in the code. Such statements must absolutely be removed from the source code to prevent any unexpected behavior or added vulnerability to attacks in production.

Noncompliant code example

for (let i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
  // Wait for user to resume.
  debugger;
}

Compliant solution

for (let i = 1; i < 5; i++) {
  // Print i to the Output window.
  Debug.write("loop index is " + i);
}

Resources

javascript:S2612

In Unix file system permissions, the "others" category refers to all users except the owner of the file system resource and the members of the group assigned to this resource.

Granting permissions to this category can lead to unintended access to files or directories that could allow attackers to obtain sensitive information, disrupt services or elevate privileges.

Ask Yourself Whether

  • The application is designed to be run on a multi-user environment.
  • Corresponding files and directories may contain confidential information.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

The most restrictive possible permissions should be assigned to files and directories.

Sensitive Code Example

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o777); // Sensitive

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o777); // Sensitive
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o000); // Sensitive

Compliant Solution

Node.js fs

const fs = require('fs');

fs.chmodSync("/tmp/fs", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

fsPromises.chmod("/tmp/fsPromises", 0o770); // Compliant

const fs = require('fs');
const fsPromises = fs.promises;

async function fileHandler() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('/tmp/fsPromises', 'r');
    await filehandle.chmod(0o770); // Compliant
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}

Node.js process.umask

const process = require('process');

process.umask(0o007); // Compliant

See

javascript:S4721

Arbitrary OS command injection vulnerabilities are more likely when a shell is spawned rather than a new process; indeed, shell metacharacters can be used (when parameters are user-controlled, for instance) to inject OS commands.

Ask Yourself Whether

  • OS command name or parameters are user-controlled.

There is a risk if you answered yes to this question.

Recommended Secure Coding Practices

Use functions that don’t spawn a shell.
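
For instance, execFile without the shell option passes user input as a literal argument, so shell metacharacters are never interpreted; a minimal sketch, where userControlledName is an assumed user-provided value:

const cp = require('child_process');

// No shell is spawned; userControlledName is passed as a literal argument,
// so metacharacters such as ';' or '|' are not interpreted
cp.execFile('/usr/bin/file', [userControlledName], (error, stdout) => {
  if (error) throw error;
  console.log(stdout);
});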

Sensitive Code Example

const cp = require('child_process');

// A shell will be spawned in the following cases:
cp.exec(cmd); // Sensitive
cp.execSync(cmd); // Sensitive

cp.spawn(cmd, { shell: true }); // Sensitive
cp.spawnSync(cmd, { shell: true }); // Sensitive
cp.execFile(cmd, { shell: true }); // Sensitive
cp.execFileSync(cmd, { shell: true }); // Sensitive

Compliant Solution

const cp = require('child_process');

cp.spawnSync("/usr/bin/file.exe", { shell: false }); // Compliant

See

javascript:S1313

Hardcoding IP addresses is security-sensitive. It has led to vulnerabilities in the past.

Today’s services have an ever-changing architecture due to their scaling and redundancy needs. It is a mistake to think that a service will always have the same IP address. When it does change, the hardcoded IP will have to be modified too. This will have an impact on the product development, delivery, and deployment:

  • The developers will have to do a rapid fix every time this happens, instead of having an operation team change a configuration file.
  • It encourages using the same address in every environment (dev, sys, qa, prod).

Last but not least it has an effect on application security. Attackers might be able to decompile the code and thereby discover a potentially sensitive address. They can perform a Denial of Service attack on the service, try to get access to the system, or try to spoof the IP address to bypass security checks. Such attacks can always be possible, but in the case of a hardcoded IP address solving the issue will take more time, which will increase an attack’s impact.

Ask Yourself Whether

The disclosed IP address is sensitive, e.g.:

  • Can give information to an attacker about the network topology.
  • It’s a personal (assigned to an identifiable person) IP address.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

Don’t hard-code the IP address in the source code; instead, make it configurable with environment variables, configuration files, or a similar approach. Alternatively, if confidentiality is not required, a domain name can be used, since it allows the destination to be changed quickly without having to rebuild the software.

Sensitive Code Example

ip = "192.168.12.42"; // Sensitive

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Compliant Solution

ip = process.env.IP_ADDRESS; // Compliant

const net = require('net');
var client = new net.Socket();
client.connect(80, ip, function() {
  // ...
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Loopback addresses 127.0.0.0/8 in CIDR notation (from 127.0.0.0 to 127.255.255.255)
  • Broadcast address 255.255.255.255
  • Non routable address 0.0.0.0
  • Strings of the form 2.5.<number>.<number> as they often match Object Identifiers (OID).
  • Addresses in the ranges 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24, reserved for documentation purposes by RFC 5737
  • Addresses in the 2001:db8::/32 range, reserved for documentation purposes by RFC 3849

See

javascript:S4823

This rule is deprecated, and will eventually be removed.

Using command line arguments is security-sensitive. It has led to vulnerabilities in the past.

Command line arguments can be dangerous just like any other user input. They should never be used without being first validated and sanitized.

Remember also that any user can retrieve the list of processes running on a system, which makes the arguments provided to them visible. Thus passing sensitive information via command line arguments should be considered as insecure.

This rule raises an issue on every program entry point (main method) where command line arguments are used. The goal is to guide security code reviews.

Ask Yourself Whether

  • any of the command line arguments are used without being sanitized first.
  • your application accepts sensitive information via command line arguments.

If you answered yes to any of these questions you are at risk.

Recommended Secure Coding Practices

Sanitize all command line arguments before using them.

Any user or application can list running processes and see the command line arguments they were started with. There are safer ways of providing sensitive information to an application than exposing it in the command line. It is common to write it on the process' standard input, or to give the path to a file containing the information.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
var param = process.argv[2]; // Sensitive: check how the argument is used
console.log('Param: ' + param);

See

javascript:S4829

This rule is deprecated, and will eventually be removed.

Reading Standard Input is security-sensitive. It has led to vulnerabilities in the past.

It is common for attackers to craft inputs enabling them to exploit software vulnerabilities. Thus any data read from the standard input (stdin) can be dangerous and should be validated.

This rule flags code that reads from the standard input.

Ask Yourself Whether

  • data read from the standard input is not sanitized before being used.

You are at risk if you answered yes to this question.

Recommended Secure Coding Practices

Sanitize all data read from the standard input before using it.

Sensitive Code Example

// The process object is a global that provides information about, and control over, the current Node.js process
// All uses of process.stdin are security-sensitive and should be reviewed

process.stdin.on('readable', () => {
	const chunk = process.stdin.read(); // Sensitive
	if (chunk !== null) {
		dosomething(chunk);
	}
});

const readline = require('readline');
readline.createInterface({
	input: process.stdin // Sensitive
}).on('line', (input) => {
	dosomething(input);
});

See

javascript:S1442

This rule is deprecated; use S4507 instead.

Why is this an issue?

alert(...) as well as confirm(...) and prompt(...) can be useful for debugging during development, but in production mode this kind of pop-up could expose sensitive information to attackers, and should never be displayed.

Noncompliant code example

if(unexpectedCondition) {
  alert("Unexpected Condition");
}

Resources

javascript:S4830

This vulnerability makes it possible that an encrypted communication is intercepted.

Why is this an issue?

Transport Layer Security (TLS) provides secure communication between systems over the internet by encrypting the data sent between them. Certificate validation adds an extra layer of trust and security to this process to ensure that a system is indeed the one it claims to be.

When certificate validation is disabled, the client skips a critical security check. This creates an opportunity for attackers to pose as a trusted entity and intercept, manipulate, or steal the data being transmitted.

What is the potential impact?

Establishing trust in a secure way is a non-trivial task. When you disable certificate validation, you are removing a key mechanism designed to build this trust in internet communication, opening your system up to a number of potential threats.

Identity spoofing

If a system does not validate certificates, it cannot confirm the identity of the other party involved in the communication. An attacker can exploit this by creating a fake server and masquerading as a legitimate one. For example, they might set up a server that looks like your bank’s server, tricking your system into thinking it is communicating with the bank. This scenario, called identity spoofing, allows the attacker to collect any data your system sends to them, potentially leading to significant data breaches.

Loss of data integrity

When TLS certificate validation is disabled, the integrity of the data you send and receive cannot be guaranteed. An attacker could modify the data in transit, and you would have no way of knowing. This could range from subtle manipulations of the data you receive to the injection of malicious code or malware into your system. The consequences of such breaches of data integrity can be severe, depending on the nature of the data and the system.

How to fix it in Node.js

Code examples

The following code contains examples of disabled certificate validation.

The certificate validation gets disabled by setting rejectUnauthorized to false. To enable validation, set the value to true, or do not set rejectUnauthorized at all to use the secure default value.

Noncompliant code example

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  rejectUnauthorized: false,
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
}); // Noncompliant

const tls = require('node:tls');

let options = {
    rejectUnauthorized: false,
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
}); // Noncompliant

Compliant solution

const https = require('node:https');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

const tls = require('node:tls');

let options = {
    secureProtocol: 'TLSv1_2_method'
};

let socket = tls.connect(443, "www.example.com", options, () => {
  process.stdin.pipe(socket);
  process.stdin.resume();
});

How does this work?

Addressing the vulnerability of disabled TLS certificate validation primarily involves re-enabling the default validation.

To avoid running into problems with invalid certificates, consider the following sections.

Using trusted certificates

If possible, always use a certificate issued by a well-known, trusted CA for your server. Most programming environments come with a predefined list of trusted root CAs, and certificates issued by these authorities are validated automatically. This is the best practice, and it requires no additional code or configuration.

Working with self-signed certificates or non-standard CAs

In some cases, you might need to work with a server using a self-signed certificate, or a certificate issued by a CA not included in your trusted roots. Rather than disabling certificate validation in your code, you can add the necessary certificates to your trust store.
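
In Node.js this can be done per connection with the ca option (or process-wide with the NODE_EXTRA_CA_CERTS environment variable) instead of setting rejectUnauthorized to false; a sketch, where the certificate path is a placeholder:

const https = require('node:https');
const fs = require('node:fs');

let options = {
  hostname: 'www.example.com',
  port: 443,
  path: '/',
  method: 'GET',
  ca: fs.readFileSync('/path/to/internal-ca.pem'), // trust this CA; validation stays enabled
  secureProtocol: 'TLSv1_2_method'
};

let req = https.request(options, (res) => {
  res.on('data', (d) => {
    process.stdout.write(d);
  });
});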

Resources

Standards

javascript:S6265

Predefined permissions, also known as canned ACLs, are an easy way to grant large privileges to predefined groups or users.

The following canned ACLs are security-sensitive:

  • PUBLIC_READ, PUBLIC_READ_WRITE grant respectively "read" and "read and write" privileges to anyone, either authenticated or anonymous (AllUsers group).
  • AUTHENTICATED_READ grants "read" privilege to all authenticated users (AuthenticatedUsers group).

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css …​).

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege policy, i.e., to only grant users the necessary permissions for their required tasks. In the context of canned ACLs, set the ACL to PRIVATE (the default one), and if more granularity is needed, use an appropriate S3 policy.

Sensitive Code Example

All users, either authenticated or anonymous, have read and write permissions with the PUBLIC_READ_WRITE access control:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PUBLIC_READ_WRITE // Sensitive
});

Compliant Solution

With the PRIVATE access control (default), only the bucket owner has the read/write permissions on the bucket and its ACL.

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'bucket', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

new s3deploy.BucketDeployment(this, 'DeployWebsite', {
    accessControl: s3.BucketAccessControl.PRIVATE
});

See

javascript:S6268

Angular prevents XSS vulnerabilities by treating all values as untrusted by default. Untrusted values are systematically sanitized by the framework before they are inserted into the DOM.

Still, developers have the ability to manually mark a value as trusted if they are sure that the value is already sanitized. Accidentally trusting malicious data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks like accessing/modifying sensitive information or impersonating other users.

Ask Yourself Whether

  • The value for which sanitization has been disabled is user-controlled.
  • It’s difficult to understand how this value is constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid including dynamic executable code and thus disabling Angular’s built-in sanitization unless it’s absolutely necessary. Try instead to rely as much as possible on static templates and Angular built-in sanitization to define web page content.
  • Make sure to understand how the value to consider as trusted is constructed and never concatenate it with user-controlled data.
  • Make sure to choose the correct DomSanitizer "bypass" method based on the context. For instance, only use bypassSecurityTrustUrl to trust urls in an href attribute context.

Sensitive Code Example

import { Component, OnInit } from '@angular/core';
import { DomSanitizer, SafeHtml } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello" [innerHTML]="hello"></div>'
})
export class HelloComponent implements OnInit {
  hello: SafeHtml;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    let name = this.route.snapshot.queryParams.name;
    let html = "<h1>Hello " + name + "</h1>";
    this.hello = this.sanitizer.bypassSecurityTrustHtml(html); // Sensitive
  }
}

Compliant Solution

import { Component, OnInit } from '@angular/core';
import { DomSanitizer } from "@angular/platform-browser";
import { ActivatedRoute } from '@angular/router';

@Component({
  template: '<div id="hello"><h1>Hello {{name}}</h1></div>',
})
export class HelloComponent implements OnInit {
  name: string;

  constructor(private sanitizer: DomSanitizer, private route: ActivatedRoute) { }

  ngOnInit(): void {
    this.name = this.route.snapshot.queryParams.name;
  }
}

See

javascript:S5042

Successful Zip Bomb attacks occur when an application expands untrusted archive files without controlling the size of the expanded data, which can lead to denial of service. A Zip bomb is usually a malicious archive file of a few kilobytes of compressed data that expands into gigabytes of uncompressed data. To achieve this extreme compression ratio, attackers compress irrelevant data (e.g. a long string of repeated bytes).

Ask Yourself Whether

Archives to expand are untrusted and:

  • There is no validation of the number of entries in the archive.
  • There is no validation of the total size of the uncompressed data.
  • There is no validation of the ratio between the compressed and uncompressed archive entry.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Define and control the ratio between compressed and uncompressed data; in general, the compression ratio for most legitimate archives is 1 to 3.
  • Define and control the threshold for the maximum total size of the uncompressed data.
  • Count the number of file entries extracted from the archive and abort the extraction if their number is greater than a predefined threshold; in particular, it is not recommended to recursively expand archives (an entry of an archive could itself be an archive).

Sensitive Code Example

For tar module:

const tar = require('tar');

tar.x({ // Sensitive
  file: 'foo.tar.gz'
});

For adm-zip module:

const AdmZip = require('adm-zip');

let zip = new AdmZip("./foo.zip");
zip.extractAllTo("."); // Sensitive

For jszip module:

const fs = require("fs");
const JSZip = require("jszip");

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) { // Sensitive
    zip.forEach(function (relativePath, zipEntry) {
      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(zipEntry.name);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          fs.writeFileSync(zipEntry.name, content);
        });
      }
    });
  });
});

For yauzl module

const yauzl = require('yauzl');

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  zipfile.on("entry", function(entry) {
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

For extract-zip module:

const extract = require('extract-zip')

async function main() {
  let target = __dirname + '/test';
  await extract('test.zip', { dir: target }); // Sensitive
}
main();

Compliant Solution

For tar module:

const tar = require('tar');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;

tar.x({
  file: 'foo.tar.gz',
  filter: (path, entry) => {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    totalSize += entry.size;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    return true;
  }
});

For adm-zip module:

const AdmZip = require('adm-zip');
const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

let fileCount = 0;
let totalSize = 0;
let zip = new AdmZip("./foo.zip");
let zipEntries = zip.getEntries();
zipEntries.forEach(function(zipEntry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
    }

    let entrySize = zipEntry.getData().length;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
    }

    let compressionRatio = entrySize / zipEntry.header.compressedSize;
    if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
    }

    if (!zipEntry.isDirectory) {
        zip.extractEntryTo(zipEntry.entryName, ".");
    }
});

For jszip module:

const fs = require("fs");
const pathmodule = require("path");
const JSZip = require("jszip");

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB

let fileCount = 0;
let totalSize = 0;
let targetDirectory = __dirname + '/archive_tmp';

fs.readFile("foo.zip", function(err, data) {
  if (err) throw err;
  JSZip.loadAsync(data).then(function (zip) {
    zip.forEach(function (relativePath, zipEntry) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // Prevent ZipSlip path traversal (S6096)
      const resolvedPath = pathmodule.join(targetDirectory, zipEntry.name);
      if (!resolvedPath.startsWith(targetDirectory)) {
        throw 'Path traversal detected';
      }

      if (!zip.file(zipEntry.name)) {
        fs.mkdirSync(resolvedPath);
      } else {
        zip.file(zipEntry.name).async('nodebuffer').then(function (content) {
          totalSize += content.length;
          if (totalSize > MAX_SIZE) {
            throw 'Reached max. size';
          }

          fs.writeFileSync(resolvedPath, content);
        });
      }
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.

For yauzl module:

const yauzl = require('yauzl');

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  let fileCount = 0;
  let totalSize = 0;

  zipfile.on("entry", function(entry) {
    fileCount++;
    if (fileCount > MAX_FILES) {
      throw 'Reached max. number of files';
    }

    // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
    // Alternatively, calculate the size from the readStream.
    let entrySize = entry.uncompressedSize;
    totalSize += entrySize;
    if (totalSize > MAX_SIZE) {
      throw 'Reached max. size';
    }

    if (entry.compressedSize > 0) {
      let compressionRatio = entrySize / entry.compressedSize;
      if (compressionRatio > THRESHOLD_RATIO) {
        throw 'Reached max. compression ratio';
      }
    }

    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      // TODO: extract
    });
  });
});

Be aware that due to the similar structure of sensitive and compliant code the issue will be raised in both cases. It is up to the developer to decide if the implementation is secure.
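
The size check above trusts entry.uncompressedSize from the archive headers. A more robust variant, sketched below on the same yauzl API, counts the bytes actually inflated from the read stream; the MAX_SIZE constant and the error handling are illustrative assumptions, not part of the rule description:

const yauzl = require('yauzl');

const MAX_SIZE = 1000000000; // 1 GB

yauzl.open('foo.zip', function (err, zipfile) {
  if (err) throw err;

  let totalSize = 0;

  zipfile.on("entry", function(entry) {
    zipfile.openReadStream(entry, function(err, readStream) {
      if (err) throw err;
      readStream.on('data', function (chunk) {
        // Count the bytes actually inflated instead of trusting the headers.
        totalSize += chunk.length;
        if (totalSize > MAX_SIZE) {
          readStream.destroy(new Error('Reached max. size'));
        }
      });
      // TODO: extract
    });
  });
});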

For extract-zip module:

const extract = require('extract-zip')

const MAX_FILES = 10000;
const MAX_SIZE = 1000000000; // 1 GB
const THRESHOLD_RATIO = 10;

async function main() {
  let fileCount = 0;
  let totalSize = 0;

  let target = __dirname + '/foo';
  await extract('foo.zip', {
    dir: target,
    onEntry: function(entry, zipfile) {
      fileCount++;
      if (fileCount > MAX_FILES) {
        throw 'Reached max. number of files';
      }

      // The uncompressedSize comes from the zip headers, so it might not be trustworthy.
      // Alternatively, calculate the size from the readStream.
      let entrySize = entry.uncompressedSize;
      totalSize += entrySize;
      if (totalSize > MAX_SIZE) {
        throw 'Reached max. size';
      }

      if (entry.compressedSize > 0) {
        let compressionRatio = entrySize / entry.compressedSize;
        if (compressionRatio > THRESHOLD_RATIO) {
          throw 'Reached max. compression ratio';
        }
      }
    }
  });
}
main();

See

javascript:S6245

This rule is deprecated, and will eventually be removed.

Server-side encryption (SSE) encrypts an object (not the metadata) as it is written to disk (where the S3 bucket resides) and decrypts it as it is read from disk. This doesn’t change the way the objects are accessed, as long as the user has the necessary permissions, objects are retrieved as if they were unencrypted. Thus, SSE only helps in the event of disk thefts, improper disposals of disks and other attacks on the AWS infrastructure itself.

There are three SSE options:

  • Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
    • AWS manages encryption keys and the encryption itself (with AES-256) on its own.
  • Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
    • AWS manages the encryption (AES-256) of objects and encryption keys provided by the AWS KMS service.
  • Server-Side Encryption with Customer-Provided Keys (SSE-C)
    • AWS manages only the encryption (AES-256) of objects with encryption keys provided by the customer. AWS doesn’t store the customer’s encryption keys.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure needs to comply to some regulations, like HIPAA or PCI DSS, and other standards.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to use SSE. Choosing the appropriate option depends on the level of control required for the management of encryption keys.

Sensitive Code Example

Server-side encryption is not used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'default'
}); // Sensitive

Bucket encryption is disabled by default.

Compliant Solution

Server-side encryption with Amazon S3-Managed Keys is used:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS_MANAGED
});

// Alternatively, with a KMS key managed by the user:

new s3.Bucket(this, 'id', {
    encryption: s3.BucketEncryption.KMS,
    encryptionKey: access_key
});
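
The rule description also lists SSE-S3 as an option. Assuming the same aws-cdk-lib/aws-s3 module, it can be selected with the S3_MANAGED member of the same enum (a minimal sketch):

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    // SSE-S3: AWS manages both the keys and the AES-256 encryption.
    encryption: s3.BucketEncryption.S3_MANAGED
});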

See

javascript:S6249

By default, S3 buckets can be accessed through the HTTP and HTTPS protocols.

As HTTP is a clear-text protocol, it lacks encryption of transported data, as well as the capability to build an authenticated connection. This means that a malicious actor able to intercept network traffic can read, modify, or corrupt the transported content.

Ask Yourself Whether

  • The S3 bucket stores sensitive information.
  • The infrastructure has to comply with AWS Foundational Security Best Practices standard.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enforce HTTPS-only access by setting the enforceSSL property to true.

Sensitive Code Example

Access to S3 bucket objects through TLS is not enforced by default:

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example'); // Sensitive

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

const bucket = new s3.Bucket(this, 'example', {
    bucketName: 'example',
    versioned: true,
    publicReadAccess: false,
    enforceSSL: true
});
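
For reference, setting enforceSSL: true is roughly equivalent to attaching a deny statement for non-TLS requests to the bucket policy. The following sketch assumes the aws-cdk-lib/aws-iam module is available and illustrates the intent rather than the exact statement CDK generates:

const iam = require('aws-cdk-lib/aws-iam');

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.DENY,
    principals: [new iam.AnyPrincipal()],
    actions: ['s3:*'],
    resources: [bucket.bucketArn, bucket.arnForObjects('*')],
    // Deny any request that does not arrive over TLS.
    conditions: { Bool: { 'aws:SecureTransport': 'false' } }
}));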

See

javascript:S6252

S3 buckets can be versioned. When an S3 bucket is unversioned, a new version of an object overwrites the existing one in the bucket.

It can lead to unintentional or intentional information loss.

Ask Yourself Whether

  • The bucket stores information that requires high availability.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to enable S3 versioning and thus to have the possibility to retrieve and restore different versions of an object.

Sensitive Code Example

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: false // Sensitive
});

The default value of versioned is false, so the absence of this parameter is also sensitive.

Compliant Solution

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    versioned: true
});

See

javascript:S6270

Resource-based policies granting access to all users can lead to information leakage.

Ask Yourself Whether

  • The AWS resource stores or processes sensitive data.
  • The AWS resource is designed to be private.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to implement the least privilege principle, i.e. to grant users only the permissions necessary for their required tasks. In the context of resource-based policies, list the principals that need access and grant them only the required privileges.

Sensitive Code Example

This policy allows all users, including anonymous ones, to access an S3 bucket:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AnyPrincipal()] // Sensitive
}))

Compliant Solution

This policy allows only the authorized users:

import { aws_iam as iam } from 'aws-cdk-lib'
import { aws_s3 as s3 } from 'aws-cdk-lib'

const bucket = new s3.Bucket(this, "ExampleBucket")

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:*"],
    resources: [bucket.arnForObjects("*")],
    principals: [new iam.AccountRootPrincipal()]
}))
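
If root-account access is still too broad, the same module offers narrower principals; the account ID and user name below are hypothetical examples:

bucket.addToResourcePolicy(new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["s3:GetObject"],
    resources: [bucket.arnForObjects("*")],
    // Grant read access to a single, explicitly named IAM user.
    principals: [new iam.ArnPrincipal("arn:aws:iam::123456789012:user/example-user")]
}))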

See

javascript:S6275

Amazon Elastic Block Store (EBS) is a block-storage service for Amazon Elastic Compute Cloud (EC2). EBS volumes can be encrypted, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. In the case that adversaries gain physical access to the storage medium, they are not able to access the data. Encryption can be enabled for specific volumes or for all new volumes and snapshots. Volumes created from snapshots inherit their encryption configuration. A volume created from an encrypted snapshot will also be encrypted by default.

Ask Yourself Whether

  • The disk contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt EBS volumes that contain sensitive information. Encryption and decryption are handled transparently by EC2, so no further modifications to the application are necessary. Instead of enabling encryption for every volume, it is also possible to enable encryption globally for a specific region. While creating volumes from encrypted snapshots will result in them being encrypted, explicitly enabling this security parameter will prevent any future unexpected security downgrade.
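
Enabling encryption by default for a region is an account-level EC2 setting rather than a CDK construct property. A minimal sketch, assuming the AWS SDK v3 EC2 client (the region is illustrative):

const { EC2Client, EnableEbsEncryptionByDefaultCommand } = require('@aws-sdk/client-ec2');

async function enableDefaultEbsEncryption() {
  const client = new EC2Client({ region: 'eu-west-1' });
  // New EBS volumes created in this region will be encrypted by default.
  await client.send(new EnableEbsEncryptionByDefaultCommand({}));
}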

Sensitive Code Example

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-explicit', {
      availabilityZone: 'us-west-2a',
      size: Size.gibibytes(1),
      encrypted: false // Sensitive
    });

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'unencrypted-implicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
    }); // Sensitive as encryption is disabled by default

Compliant Solution

For aws_cdk.aws_ec2.Volume:

import { Size } from 'aws-cdk-lib';
import { Volume } from 'aws-cdk-lib/aws-ec2';

new Volume(this, 'encrypted-explicit', {
      availabilityZone: 'eu-west-1a',
      size: Size.gibibytes(1),
      encrypted: true
    });

See

javascript:S2817

This rule is deprecated, and will eventually be removed.

Why is this an issue?

The Web SQL Database standard never saw the light of day. It was first formulated, then deprecated by the W3C and was only implemented in some browsers. (It is not supported in Firefox or IE.)

Further, the use of a Web SQL Database poses security concerns, since you only need its name to access such a database.

Noncompliant code example

var db = window.openDatabase("myDb", "1.0", "Personal secrets stored here", 2*1024*1024);  // Noncompliant
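
IndexedDB is the W3C-maintained replacement for Web SQL. A minimal sketch of equivalent client-side storage, not part of the original rule description:

const request = window.indexedDB.open("myDb", 1);

request.onupgradeneeded = function (event) {
  // Create the object store on first use or version upgrade.
  event.target.result.createObjectStore("items", { keyPath: "id" });
};

request.onsuccess = function (event) {
  const db = event.target.result;
  db.transaction("items", "readwrite")
    .objectStore("items")
    .put({ id: 1, value: "example" });
};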

Resources

javascript:S2819

Cross-origin communication allows different websites to interact with each other. This interaction is typically achieved through mechanisms like AJAX requests, WebSockets, or postMessage API. However, a vulnerability can arise when these communications are not properly secured by verifying their origins.

Why is this an issue?

Without origin verification, the target website cannot distinguish between legitimate requests from its own pages and malicious requests from an attacker’s site. The attacker can craft a malicious website or script that sends requests to a target website where the user is already authenticated.

This vulnerability class is not about a single specific user input or action, but rather a series of actions that lead to an insecure cross-origin communication.

What is the potential impact?

The absence of origin verification during cross-origin communications can lead to serious security issues.

Data Breach

If an attacker can successfully exploit this vulnerability, they may gain unauthorized access to sensitive data. For instance, a user’s personal information, financial details, or other confidential data could be exposed. This not only compromises the user’s privacy but can also lead to identity theft or financial loss.

Unauthorized Actions

An attacker could manipulate the communication between websites to perform actions on behalf of the user without their knowledge. This could range from making unauthorized purchases to changing user settings or even deleting accounts.

How to fix it

When sending a message, avoid using * for the target origin (it means no preference). Instead, define it explicitly so the message will only be dispatched to this URI. When receiving the message, verify the origin to be sure that it is sent by an authorized sender.

Code examples

Noncompliant code example

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("hello", "*"); // Noncompliant: * is used

When receiving a message:

window.addEventListener("message", function(event) { // Noncompliant: no checks are done on the origin property.
  console.log(event.data);
 });

Compliant solution

When sending a message:

var iframe = document.getElementById("testiframe");
iframe.contentWindow.postMessage("hello", "https://secure.example.com");

When receiving a message:

window.addEventListener("message", function(event) {
  if (event.origin !== "http://example.org")
    return;

  console.log(event.data)
});

Resources

Documentation

Standards

javascript:S6281

By default S3 buckets are private, it means that only the bucket owner can access it.

This access control can be relaxed with ACLs or policies.

To prevent permissive policies or ACLs from being set on an S3 bucket, the following boolean settings can be enabled:

  • blockPublicAcls: whether to block public ACLs from being set on the S3 bucket.
  • ignorePublicAcls: whether to ignore existing public ACLs set on the S3 bucket.
  • blockPublicPolicy: whether to block public policies from being set on the S3 bucket.
  • restrictPublicBuckets: whether to restrict access to the S3 endpoints of public policies to the principals within the bucket owner account.

The attribute BlockPublicAccess.BLOCK_ACLS only turns on blockPublicAcls and ignorePublicAcls. Public policies can still affect the S3 bucket.

However, all of those options can be enabled by setting the blockPublicAccess property of the S3 bucket to BlockPublicAccess.BLOCK_ALL.

Ask Yourself Whether

  • The S3 bucket stores sensitive data.
  • The S3 bucket is not used to store static resources of websites (images, css, …).
  • Many users have the permission to set ACL or policy to the S3 bucket.
  • These settings are not already enforced to true at the account level.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to configure:

  • blockPublicAcls to true to block new attempts to set public ACLs.
  • ignorePublicAcls to true to ignore existing public ACLs.
  • blockPublicPolicy to true to block new attempts to set public policies.
  • restrictPublicBuckets to true to restrict existing public policies.

Sensitive Code Example

By default, when not set, blockPublicAccess is fully deactivated (nothing is blocked):

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket'
}); // Sensitive

This blockPublicAccess configuration allows public ACLs to be set:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : false, // Sensitive
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

The attribute BLOCK_ACLS only blocks and ignores public ACLs:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS // Sensitive
});

Compliant Solution

This blockPublicAccess blocks public ACLs and policies, ignores existing public ACLs and restricts existing public policies:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL
});

A similar configuration to the one above can be obtained by setting all parameters of the blockPublicAccess property:

const s3 = require('aws-cdk-lib/aws-s3');

new s3.Bucket(this, 'id', {
    bucketName: 'bucket',
    blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls         : true,
        blockPublicPolicy       : true,
        ignorePublicAcls        : true,
        restrictPublicBuckets   : true
    })
});

See

javascript:S2068

Because it is easy to extract strings from an application source code or binary, credentials should not be hard-coded. This is particularly true for applications that are distributed or that are open-source.

In the past, it has led to the following vulnerabilities:

Credentials should be stored outside of the code in a configuration file, a database, or a management service for secrets.

This rule flags instances of hard-coded credentials used in database and LDAP connections. It looks for hard-coded credentials in connection strings, and for variable names that match any of the patterns from the provided list.

It’s recommended to customize the configuration of this rule with additional credential words such as "oauthToken", "secret", etc.

Ask Yourself Whether

  • Credentials allow access to a sensitive component like a database, a file storage, an API or a service.
  • Credentials are used in production environments.
  • Application re-distribution is required before updating the credentials.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Store the credentials in a configuration file that is not pushed to the code repository.
  • Store the credentials in a database.
  • Use your cloud provider’s service for managing secrets.
  • If a password has been disclosed through the source code: change it.

Sensitive Code Example

var mysql = require('mysql');

var connection = mysql.createConnection(
{
  host:'localhost',
  user: "admin",
  database: "project",
  password: "mypassword", // sensitive
  multipleStatements: true
});

connection.connect();

Compliant Solution

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: process.env.MYSQL_URL,
  user: process.env.MYSQL_USERNAME,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE
});
connection.connect();
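
One common way to populate these environment variables in development is the dotenv package, with the .env file excluded from version control. A minimal sketch; the file contents and values are illustrative:

// .env (listed in .gitignore):
// MYSQL_URL=localhost
// MYSQL_USERNAME=app
// MYSQL_PASSWORD=changeme
// MYSQL_DATABASE=project

require('dotenv').config(); // loads the variables above into process.env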

See

javascript:S5332

Clear-text protocols such as ftp, telnet, or http lack encryption of transported data, as well as the capability to build an authenticated connection. It means that an attacker able to sniff traffic from the network can read, modify, or corrupt the transported content. These protocols are not secure as they expose applications to an extensive range of risks:

  • sensitive data exposure
  • traffic redirected to a malicious endpoint
  • malware-infected software update or installer
  • execution of client-side code
  • corruption of critical information

Even in the context of isolated networks like offline environments or segmented cloud environments, the insider threat exists. Thus, attacks involving communications being sniffed or tampered with can still happen.

For example, attackers could successfully compromise prior security layers by:

  • bypassing isolation mechanisms
  • compromising a component of the network
  • getting the credentials of an internal IAM account (either from a service account or an actual person)

In such cases, encrypting communications would decrease the chances of attackers to successfully leak data or steal credentials from other network components. By layering various security practices (segmentation and encryption, for example), the application will follow the defense-in-depth principle.

Note that using the http protocol is being deprecated by major web browsers.

In the past, it has led to the following vulnerabilities:

Ask Yourself Whether

  • Application data needs to be protected against falsifications or leaks when transiting over the network.
  • Application data transits over an untrusted network.
  • Compliance rules require the service to encrypt data in transit.
  • Your application renders web pages with a relaxed mixed content policy.
  • OS-level protections against clear-text traffic are deactivated.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Make application data transit over a secure, authenticated and encrypted protocol like TLS or SSH. Here are a few alternatives to the most common clear-text protocols:
    • Use ssh as an alternative to telnet.
    • Use sftp, scp, or ftps instead of ftp.
    • Use https instead of http.
    • Use SMTP over SSL/TLS or SMTP with STARTTLS instead of clear-text SMTP.
  • Enable encryption of cloud components communications whenever it is possible.
  • Configure your application to block mixed content when rendering web pages.
  • If available, enforce OS-level deactivation of all clear-text traffic.

It is recommended to secure all transport channels, even on local networks, as it can take a single non-secure connection to compromise an entire application or system.

Sensitive Code Example

url = "http://example.com"; // Sensitive
url = "ftp://anonymous@example.com"; // Sensitive
url = "telnet://anonymous@example.com"; // Sensitive

For nodemailer:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: false, // Sensitive
  requireTLS: false // Sensitive
});

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({}); // Sensitive

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': false // Sensitive
});

For telnet-client:

const Telnet = require('telnet-client'); // Sensitive

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-http-default', {
  port: 8080,
  open: true
}); // Sensitive

alb.addListener('listener-http-explicit', {
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-http-explicit-constructor', {
  loadBalancer: alb,
  protocol: ApplicationProtocol.HTTP, // Sensitive
  port: 8080,
  open: true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

var listenerNLB = nlb.addListener('listener-tcp-default', {
  port: 1234
}); // Sensitive

listenerNLB = nlb.addListener('listener-tcp-explicit', {
  protocol: Protocol.TCP, // Sensitive
  port: 1234
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tcp-explicit-constructor', {
  loadBalancer: nlb,
  protocol: Protocol.TCP, // Sensitive
  port: 8080
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-http', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "HTTP", // Sensitive
  port: 80
});

new CfnListener(this, 'listener-tcp', {
  defaultActions: defaultActions,
  loadBalancerArn: alb.loadBalancerArn,
  protocol: "TCP", // Sensitive
  port: 80
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-tcp', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'tcp' // Sensitive
  }]
});

new CfnLoadBalancer(this, 'elb-http', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'http' // Sensitive
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const loadBalancer = new LoadBalancer(this, 'elb-tcp-dict', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort: 10000,
      externalProtocol: LoadBalancingProtocol.TCP, // Sensitive
      internalPort: 10000
    }]
});

loadBalancer.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.TCP, // Sensitive
  internalPort:10001
});
loadBalancer.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTP, // Sensitive
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'unencrypted-implicit', {
  replicationGroupDescription: 'exampleDescription'
}); // Sensitive

new CfnReplicationGroup(this, 'unencrypted-explicit', {
  replicationGroupDescription: 'exampleDescription',
  transitEncryptionEnabled: false // Sensitive
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-implicit-unencrypted', undefined); // Sensitive

new CfnStream(this, 'cfnstream-explicit-unencrypted', {
  streamEncryption: undefined // Sensitive
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream, StreamEncryption } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-explicit-unencrypted', {
  encryption: StreamEncryption.UNENCRYPTED // Sensitive
});

Compliant Solution

url = "https://example.com";
url = "sftp://anonymous@example.com";
url = "ssh://anonymous@example.com";

For nodemailer, one of the following options must be set:

const nodemailer = require("nodemailer");
let transporter = nodemailer.createTransport({
  secure: true,
  requireTLS: true,
  port: 465,
  secured: true
});

For ftp:

var Client = require('ftp');
var c = new Client();
c.connect({
  'secure': true
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationLoadBalancer:

import { ApplicationLoadBalancer, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const alb = new ApplicationLoadBalancer(this, 'ALB', {
  vpc: vpc,
  internetFacing: true
});

alb.addListener('listener-https-explicit', {
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

alb.addListener('listener-https-implicit', {
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.ApplicationListener:

import { ApplicationListener, ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new ApplicationListener(this, 'listener-https-explicit', {
  loadBalancer: loadBalancer,
  protocol: ApplicationProtocol.HTTPS,
  port: 8080,
  open: true,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkLoadBalancer:

import { NetworkLoadBalancer, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

const nlb = new NetworkLoadBalancer(this, 'nlb', {
  vpc: vpc,
  internetFacing: true
});

nlb.addListener('listener-tls-explicit', {
  protocol: Protocol.TLS,
  port: 1234,
  certificates: [certificate]
});

nlb.addListener('listener-tls-implicit', {
  port: 1234,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.NetworkListener:

import { NetworkListener, Protocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new NetworkListener(this, 'listener-tls-explicit', {
  loadBalancer: loadBalancer,
  protocol: Protocol.TLS,
  port: 8080,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancingv2.CfnListener:

import { CfnListener } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

new CfnListener(this, 'listener-https', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "HTTPS",
  port: 80,
  certificates: [certificate]
});

new CfnListener(this, 'listener-tls', {
  defaultActions: defaultActions,
  loadBalancerArn: loadBalancerArn,
  protocol: "TLS",
  port: 80,
  certificates: [certificate]
});

For aws-cdk-lib.aws-elasticloadbalancing.CfnLoadBalancer:

import { CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancing';

new CfnLoadBalancer(this, 'elb-ssl', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'ssl',
    sslCertificateId: sslCertificateId
  }]
});

new CfnLoadBalancer(this, 'elb-https', {
  listeners: [{
    instancePort: '1000',
    loadBalancerPort: '1000',
    protocol: 'https',
    sslCertificateId: sslCertificateId
  }]
});

For aws-cdk-lib.aws-elasticloadbalancing.LoadBalancer:

import { LoadBalancer, LoadBalancingProtocol } from 'aws-cdk-lib/aws-elasticloadbalancing';

const lb = new LoadBalancer(this, 'elb-ssl', {
  vpc,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
  listeners: [
    {
      externalPort:10000,
      externalProtocol:LoadBalancingProtocol.SSL,
      internalPort:10000
    }]
});

lb.addListener({
  externalPort:10001,
  externalProtocol:LoadBalancingProtocol.SSL,
  internalPort:10001
});
lb.addListener({
  externalPort:10002,
  externalProtocol:LoadBalancingProtocol.HTTPS,
  internalPort:10002
});

For aws-cdk-lib.aws-elasticache.CfnReplicationGroup:

import { CfnReplicationGroup } from 'aws-cdk-lib/aws-elasticache';

new CfnReplicationGroup(this, 'encrypted-explicit', {
  replicationGroupDescription: 'example',
  transitEncryptionEnabled: true
});

For aws-cdk-lib.aws-kinesis.Stream:

import { Stream, StreamEncryption } from 'aws-cdk-lib/aws-kinesis';

new Stream(this, 'stream-implicit-encrypted');

new Stream(this, 'stream-explicit-encrypted-selfmanaged', {
  encryption: StreamEncryption.KMS,
  encryptionKey: encryptionKey,
});

new Stream(this, 'stream-explicit-encrypted-managed', {
  encryption: StreamEncryption.MANAGED
});

For aws-cdk-lib.aws-kinesis.CfnStream:

import { CfnStream } from 'aws-cdk-lib/aws-kinesis';

new CfnStream(this, 'cfnstream-explicit-encrypted', {
  streamEncryption: {
    encryptionType: encryptionType,
    keyId: encryptionKey.keyId,
  }
});

Exceptions

No issue is reported for the following cases because they are not considered sensitive:

  • Insecure protocol scheme followed by loopback addresses like 127.0.0.1 or localhost.

See

javascript:S6299

The Vue.js framework prevents XSS vulnerabilities by automatically escaping HTML content, relying on native browser APIs like innerText instead of innerHTML.

It’s still possible to explicitly use innerHTML and similar APIs to render HTML. Accidentally rendering malicious HTML data will introduce an XSS vulnerability in the application and enable a wide range of serious attacks, such as accessing or modifying sensitive information or impersonating other users.

Ask Yourself Whether

The application needs to render HTML content which:

  • could be user-controlled and not previously sanitized.
  • is difficult to understand how it was constructed.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Avoid injecting HTML content with the v-html directive unless the content can be considered 100% safe; instead, rely as much as possible on Vue.js’s built-in auto-escaping features.
  • Take care when using the v-bind:href directive to set URLs, which can contain malicious JavaScript (javascript:onClick(...)).
  • Event directives like :onmouseover are also prone to JavaScript injection and should not be used with unsafe values.

Sensitive Code Example

When using Vue.js templates, the v-html directive enables HTML rendering without any sanitization:

<div v-html="htmlContent"></div> <!-- Noncompliant -->

When using a rendering function, the innerHTML attribute enables HTML rendering without any sanitization:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerHTML: this.htmlContent, // Noncompliant
        }
      }
    );
  },
});

When using JSX, the domPropsInnerHTML attribute enables HTML rendering without any sanitization:

<div domPropsInnerHTML={this.htmlContent}></div> // Noncompliant

Compliant Solution

When using Vue.js templates, putting the content as a child node of the element is safe:

<div>{{ htmlContent }}</div>

When using a rendering function, using the innerText attribute or putting the content as a child node of the element is safe:

Vue.component('element', {
  render: function (createElement) {
    return createElement(
      'div',
      {
        domProps: {
          innerText: this.htmlContent,
        }
      },
      this.htmlContent // Child node
    );
  },
});

When using JSX, putting the content as a child node of the element is safe:

<div>{this.htmlContent}</div>
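
If rendering user-supplied HTML is unavoidable, sanitizing it first reduces the risk. A minimal sketch using the DOMPurify library, which is an assumption and not part of the rule description:

import DOMPurify from 'dompurify';

Vue.component('element', {
  props: ['htmlContent'],
  computed: {
    sanitizedHtmlContent: function () {
      // Strips scripts and event-handler attributes before rendering.
      return DOMPurify.sanitize(this.htmlContent);
    }
  },
  template: '<div v-html="sanitizedHtmlContent"></div>'
});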

See

javascript:S6303

Using unencrypted RDS DB resources exposes data to unauthorized access.
This includes database data, logs, automatic backups, read replicas, snapshots, and cluster metadata.

This situation can occur in a variety of scenarios, such as:

  • A malicious insider working at the cloud provider gains physical access to the storage device.
  • Unknown attackers penetrate the cloud provider’s logical infrastructure and systems.

After a successful intrusion, the underlying applications are exposed to:

  • theft of intellectual property and/or personal data
  • extortion
  • denial of services and security bypasses via data corruption or deletion

AWS-managed encryption at rest reduces this risk with a simple switch.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to enable encryption at rest on any RDS DB resource, regardless of the engine.
In any case, no further maintenance is required as encryption at rest is fully managed by AWS.

Sensitive Code Example

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: false, // Sensitive
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: false, // Sensitive
});

Compliant Solution

For aws-cdk-lib.aws_rds.CfnDBCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBCluster(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.CfnDBInstance:

import { aws_rds as rds } from 'aws-cdk-lib';

new rds.CfnDBInstance(this, 'example', {
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseCluster:

import { aws_rds as rds } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

const cluster = new rds.DatabaseCluster(this, 'example', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_2_08_1 }),
  instanceProps: {
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
    },
    vpc,
  },
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseClusterFromSnapshot:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseClusterFromSnapshot(this, 'example', {
  engine: rds.DatabaseClusterEngine.aurora({ version: rds.AuroraEngineVersion.VER_1_22_2 }),
  instanceProps: {
    vpc,
  },
  snapshotIdentifier: 'exampleSnapshot',
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstance:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;

new rds.DatabaseInstance(this, 'example', {
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  vpc,
  storageEncrypted: true,
});

For aws-cdk-lib.aws_rds.DatabaseInstanceReadReplica:

import { aws_rds as rds } from 'aws-cdk-lib';
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

declare const sourceInstance: rds.DatabaseInstance;

new rds.DatabaseInstanceReadReplica(this, 'example', {
  sourceDatabaseInstance: sourceInstance,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.LARGE),
  vpc,
  storageEncrypted: true,
});

See

javascript:S6304

A policy that allows identities to access all resources in an AWS account may violate the principle of least privilege. Suppose an identity has permission to access all resources even though it only requires access to some non-sensitive ones. In this case, unauthorized access and disclosure of sensitive information will occur.

Ask Yourself Whether

The AWS account has more than one resource with different levels of sensitivity.

A risk exists if you answered yes to this question.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e., to grant access only to the necessary resources. A good practice to achieve this is to organize or tag resources depending on the sensitivity level of the data they store or process. This makes secure access control less error-prone.

Sensitive Code Example

The wildcard "*" is specified as the resource for this PolicyStatement. This grants the update permission for all policies of the account:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["*"] // Sensitive
        })
    ]
})

Compliant Solution

Restrict the update permission to the appropriate subset of policies:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [
        new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: ["iam:CreatePolicyVersion"],
            resources: ["arn:aws:iam:::policy/team1/*"]
        })
    ]
})

Exceptions

  • Should not be raised on key policies (when AWS KMS actions are used).
  • Should not be raised on policies not using any resources (if and only if all actions in the policy never require resources).

See

javascript:S5691

Hidden files are created automatically by many tools to save user preferences; well-known examples are .profile, .bashrc, .bash_history or .git. To simplify the view, these files are not displayed by default by operating system commands like ls.

Outside of the user environment, hidden files are sensitive because they are used to store privacy-related information or even hard-coded secrets.

Ask Yourself Whether

  • Hidden files may have been inadvertently uploaded to the static server’s public directory, and the server accepts requests for hidden files.
  • There is no business use case for serving files in the .name format, but the server is not configured to reject requests for this type of file.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • Disable the serving of hidden files.

Sensitive Code Example

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'allow'});   // Sensitive
app.use(serveStaticMiddleware);

Compliant Solution

Express.js serve-static middleware:

let serveStatic = require("serve-static");
let app = express();
let serveStaticMiddleware = serveStatic('public', { 'index': false, 'dotfiles': 'ignore'});   // Compliant: ignore or deny are recommended values
let serveStaticDefault = serveStatic('public', { 'index': false});   // Compliant: by default, "dotfiles" (file or directory that begins with a dot) are not served (with the exception that files within a directory that begins with a dot are not ignored), see serve-static module documentation
app.use(serveStaticMiddleware);

See

javascript:S5693

Rejecting requests with significant content length is a good practice to control the network traffic intensity and thus resource consumption in order to prevent DoS attacks.

Ask Yourself Whether

  • size limits are not defined for the different resources of the web application.
  • the web application is not protected by rate limiting features.
  • the web application infrastructure has limited resources.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

  • For most of the features of an application, it is recommended to limit the size of requests to:
    • lower than or equal to 8 MB for file uploads.
    • lower than or equal to 2 MB for other requests.

It is recommended to customize the rule with the limit values that correspond to the web application.

Sensitive Code Example

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 10000000; // Sensitive: 10MB is more than the recommended limit of 8MB

const formDefault = new Formidable(); // Sensitive, the default value is 200MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
    fileSize: 10000000 // Sensitive: 10MB is more than the recommended limit of 8MB
  }
});

let diskUploadUnlimited = multer({ // Sensitive: the default value is no limit
  storage: diskStorage,
});

body-parser module:

// 4MB is more than the recommended limit of 2MB for non-file-upload requests
let jsonParser = bodyParser.json({ limit: "4mb" }); // Sensitive
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "4mb" }); // Sensitive

Compliant Solution

formidable file upload module:

const form = new Formidable();
form.maxFileSize = 8000000; // Compliant: 8MB

multer (Express.js middleware) file upload module:

let diskUpload = multer({
  storage: diskStorage,
  limits: {
     fileSize: 8000000 // Compliant: 8MB
  }
});

body-parser module:

let jsonParser = bodyParser.json(); // Compliant, when the limit is not defined, the default value is set to 100kb
let urlencodedParser = bodyParser.urlencoded({ extended: false, limit: "2mb" }); // Compliant
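
Recent Express versions expose the same parsers (and the same limit option) directly, so an equivalent sketch without requiring body-parser explicitly would be:

const express = require('express');

const app = express();
// Same limits as the body-parser examples above.
app.use(express.json({ limit: '100kb' }));
app.use(express.urlencoded({ extended: false, limit: '2mb' }));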

See

javascript:S6302

A policy that grants all permissions may indicate an improper access control, which violates the principle of least privilege. Suppose an identity is granted full permissions to a resource even though it only requires read permission to work as expected. In this case, an unintentional overwriting of resources may occur and therefore result in loss of information.

Ask Yourself Whether

Identities obtaining all the permissions:

  • only require a subset of these permissions to perform the intended function.
  • have monitored activity showing that only a subset of these permissions is actually used.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to apply the least privilege principle, i.e. by only granting the necessary permissions to identities. A good practice is to start with the very minimum set of permissions and to refine the policy over time. In order to fix overly permissive policies already deployed in production, a strategy could be to review the monitored activity in order to reduce the set of permissions to those most used.

Sensitive Code Example

A customer-managed policy that grants all permissions by using the wildcard (*) in the Action property:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["*"], // Sensitive
    resources: ["arn:aws:iam:::user/*"],
})

Compliant Solution

A customer-managed policy that grants only the required permissions:

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["iam:GetAccountSummary"],
    resources: ["arn:aws:iam:::user/*"],
})

See

javascript:S6308

Amazon OpenSearch Service is a managed service to host OpenSearch instances. It replaces Elasticsearch Service, which has been deprecated.

To harden domain (cluster) data in case of unauthorized access, OpenSearch provides data-at-rest encryption if the engine is OpenSearch (any version), or Elasticsearch with a version of 5.1 or above. Enabling encryption at rest will help protect:

  • indices
  • logs
  • swap files
  • data in the application directory
  • automated snapshots

Thus, adversaries cannot access the data if they gain physical access to the storage medium.

Ask Yourself Whether

  • The database contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It is recommended to encrypt OpenSearch domains that contain sensitive information.

OpenSearch handles encryption and decryption transparently, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
}); // Sensitive, encryption must be explicitly enabled

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
}); // Sensitive, encryption must be explicitly enabled

Compliant Solution

For aws-cdk-lib.aws_opensearchservice.Domain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleDomain = new opensearchservice.Domain(this, 'ExampleDomain', {
  version: opensearchservice.EngineVersion.OPENSEARCH_1_3,
  encryptionAtRest: {
    enabled: true,
  },
});

For aws-cdk-lib.aws_opensearchservice.CfnDomain:

import { aws_opensearchservice as opensearchservice } from 'aws-cdk-lib';

const exampleCfnDomain = new opensearchservice.CfnDomain(this, 'ExampleCfnDomain', {
  engineVersion: 'OpenSearch_1.3',
  encryptionAtRestOptions: {
    enabled: true,
  },
});

See

javascript:S2077

Formatted SQL queries can be difficult to maintain and debug, and concatenating untrusted values into the query increases the risk of SQL injection. However, this rule doesn’t detect SQL injections (unlike rule S3649); its goal is only to highlight complex, formatted queries.

Ask Yourself Whether

  • Some parts of the query come from untrusted values (like user inputs).
  • The query is repeated/duplicated in other parts of the code.
  • The application must support different types of relational databases.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

Use parameterized queries, prepared statements, or stored procedures, and bind user-controlled values to query parameters instead of concatenating them into the SQL string.

Sensitive Code Example

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT * FROM users WHERE id = ' + userinput, (err, res) => {}); // Sensitive

Compliant Solution

// === MySQL ===
const mysql = require('mysql');
const mycon = mysql.createConnection({ host: host, user: user, password: pass, database: db });
mycon.connect(function(err) {
  mycon.query('SELECT name FROM users WHERE id = ?', [userinput], (err, res) => {});
});

// === PostgreSQL ===
const pg = require('pg');
const pgcon = new pg.Client({ host: host, user: user, password: pass, database: db });
pgcon.connect();
pgcon.query('SELECT name FROM users WHERE id = $1', [userinput], (err, res) => {});

Exceptions

This rule’s current implementation does not follow variables. It will only detect SQL queries which are formatted directly in the function call.

const sql = 'SELECT * FROM users WHERE id = ' + userinput;
mycon.query(sql, (err, res) => {}); // Sensitive but no issue is raised.

See

javascript:S6317

Within IAM, identity-based policies grant permissions to users, groups, or roles, and enable specific actions to be performed on designated resources. When an identity policy inadvertently grants more privileges than intended, certain users or roles might be able to perform more actions than expected. This can lead to potential security risks, as it enables malicious users to escalate their privileges from a lower level to a higher level of access.

Why is this an issue?

AWS Identity and Access Management (IAM) is the service that defines access to AWS resources. One of the core components of IAM is the policy which, when attached to an identity or a resource, defines its permissions. Policies granting permission to an identity (a user, a group or a role) are called identity-based policies. They add the ability to an identity to perform a predefined set of actions on a list of resources.

For such policies, it is easy to define very broad permissions (for example, by using wildcard "*" permissions). This is especially true if it is not yet clear which permissions will be required for a specific workload or use case. However, it is important to limit both the permissions that are granted and the resources to which they are granted. Doing so ensures that no users or roles have more permissions than they need.

If this is not done, it can potentially carry security risks in the case that an attacker gets access to one of these identities.

What is the potential impact?

AWS IAM policies that contain overly broad permissions can lead to privilege escalation by granting users more access than necessary. They may be able to perform actions beyond their intended scope.

Privilege escalation

When IAM policies are too permissive, they grant users more privileges than necessary, allowing them to perform actions that they should not be able to. This can be exploited by attackers to gain unauthorized access to sensitive resources and perform malicious activities.

For example, if an IAM policy grants a user unrestricted access to all S3 buckets in an AWS account, the user can potentially read, write, and delete any object within those buckets. If an attacker gains access to this user’s credentials, they can exploit this overly permissive policy to exfiltrate sensitive data, modify or delete critical files, or even launch further attacks within the AWS environment. This can have severe consequences, such as data breaches, service disruptions, or unauthorized access to other resources within the AWS account.

How to fix it in AWS CDK

Code examples

In this example, the IAM policy allows an attacker to update the code of any Lambda function. An attacker can achieve privilege escalation by altering the code of a Lambda that executes with high privileges.

Noncompliant code example

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["*"], // Noncompliant
    })],
});

Compliant solution

The policy is narrowed such that only updates to the code of certain Lambda functions (without high privileges) are allowed.

import { aws_iam as iam } from 'aws-cdk-lib'

new iam.PolicyDocument({
    statements: [new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ["lambda:UpdateFunctionCode"],
        resources: ["arn:aws:lambda:us-east-2:123456789012:function:my-function:1"],
    })],
});

How does this work?

Principle of least privilege

When creating IAM policies, it is important to adhere to the principle of least privilege. This means that any user or role should only be granted enough permissions to perform the tasks that they are supposed to, and nothing else.

To successfully implement this, it is easier to start from nothing and gradually build up all the needed permissions. When starting from a policy with overly broad permissions which is made stricter at a later time, it can be harder to ensure that there are no gaps that might be forgotten about. In this case, it might be useful to monitor the users or roles to verify which permissions are used.

Resources

Documentation

Articles & blog posts

Standards

javascript:S6319

Amazon SageMaker is a managed machine learning service in a hosted production-ready environment. To train machine learning models, SageMaker instances can process potentially sensitive data, such as personal information that should not be stored unencrypted. In the event that adversaries physically access the storage media, they cannot decrypt encrypted data.

Ask Yourself Whether

  • The instance contains sensitive data that could cause harm when leaked.
  • There are compliance requirements for the service to store data encrypted.

There is a risk if you answered yes to any of those questions.

Recommended Secure Coding Practices

It’s recommended to encrypt SageMaker notebook instances that contain sensitive information. Encryption and decryption are handled transparently by SageMaker, so no further modifications to the application are necessary.

Sensitive Code Example

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';

new CfnNotebookInstance(this, 'example', {
      instanceType: 'instanceType',
      roleArn: 'roleArn'
}); // Sensitive

Compliant Solution

For aws-cdk-lib.aws-sagemaker.CfnNotebookInstance:

import { CfnNotebookInstance } from 'aws-cdk-lib/aws-sagemaker';
import { Key } from 'aws-cdk-lib/aws-kms';

const encryptionKey = new Key(this, 'exampleKey', {
    enableKeyRotation: true,
});
new CfnNotebookInstance(this, 'example', {
    instanceType: 'instanceType',
    roleArn: 'roleArn',
    kmsKeyId: encryptionKey.keyId
});

See

javascript:S5689

Disclosure of version information, usually overlooked by developers but enabled by default in the systems and frameworks in use, can pose a significant security risk depending on the production environment.

Once this information is public, attackers can use it to identify potential security holes or vulnerabilities specific to that version.

Furthermore, if the published version information indicates the use of outdated or unsupported software, it becomes easier for attackers to exploit known vulnerabilities. They can search for published vulnerabilities related to that version and launch attacks that specifically target those vulnerabilities.

Ask Yourself Whether

  • Version information is accessible to end users.
  • Internal systems do not benefit from timely patch management workflows.

There is a risk if you answered yes to any of these questions.

Recommended Secure Coding Practices

In general, it is recommended to keep internal technical information within internal systems to control what attackers know about the underlying architectures. This is known as the "need to know" principle.

The most effective solution is to remove version information disclosure from what end users can see, such as the "x-powered-by" header.
This can be achieved directly through the web application code, server (nginx, apache) or firewalls.

Disabling the server signature provides additional protection by reducing the amount of information available to attackers. Note, however, that this does not provide as much protection as regular updates and patches.
Security by obscurity is the least foolproof solution of all. It should never be the only defense mechanism and should always be combined with other security measures.

Sensitive Code Example

In Express.js, version information is disclosed by default in the x-powered-by HTTP header:

let express = require('express');

let example = express(); // Sensitive

example.get('/', function (req, res) {
  res.send('example')
});

Compliant Solution

x-powered-by HTTP header should be disabled in Express.js with app.disable:

let express = require('express');

let example = express();
example.disable("x-powered-by");

Or with helmet’s hidePoweredBy middleware:

let helmet = require("helmet");

let example = express();
example.use(helmet.hidePoweredBy());

See

javascript:S5148

A newly opened window having access back to the originating window could allow basic phishing attacks (the window.opener object is not null and thus window.opener.location can be set to a malicious website by the opened page).

For instance, an attacker can put a link (say: "http://example.com/mylink") on a popular website that changes, when opened, the original page to "http://example.com/fake_login". On "http://example.com/fake_login" there is a fake login page which