# Invocation command line:
# /home/cpu2017_rate/bin/harness/runcpu --configfile amd_rate_aocc400_genoa_B1.cfg --tune all --reportable --iterations 3 --nopower --runmode rate --tune base:peak --size test:train:refrate intrate
# output_root was not used for this run
############################################################################
################################################################################
# AMD AOCC 400 SPEC CPU 2017 V1.1.8 Rate Configuration File for 64-bit Linux
#
# File name                : amd_rate_aocc400_genoa_B1.cfg
# Creation Date            : October 6, 2022
# CPU 2017 Version         : 1.1.8
# Supported benchmarks     : All Rate benchmarks (intrate, fprate)
# Compiler name/version    : AOCC 4.0.0
# Operating system version : RHEL 8.6
# Supported OS's           : SLE 15 SP4, Ubuntu 22.04, RHEL 9.0, RHEL 8.6
# Hardware                 : AMD Genoa (AMD64)
# FP Base Pointer Size     : 64-bit
# FP Peak Pointer Size     : 64-bit
# INT Base Pointer Size    : 64-bit
# INT Peak Pointer Size    : 32/64-bit
# Auto Parallelization     : No
#
# Note: DO NOT EDIT THIS FILE; the only edits required to properly run these
# binaries are made in the ini Python file. Please consult Readme.amd_rate_aocc400_genoa_B1.txt
# for a few uncommon exceptions which require edits to this file.
#
# Description:
#
# This binary package automates away many of the complexities necessary to set
# up and run SPEC CPU 2017 under optimized conditions on AMD Genoa-based
# server platforms within Linux (AMD64).
#
# The binary package was built specifically for AMD Genoa microprocessors and
# is not intended to run on other products.
#
# Please install the binary package by following the instructions in
# "Readme.amd_rate_aocc400_genoa_B1.txt" under the "How To Use the Binaries" section.
#
# The binary package is designed to work without alteration on two-socket AMD
# Genoa-based servers with 96 cores per socket, SMT enabled and 1.5 TiB of DDR5
# memory distributed evenly among all 24 channels using 64 GiB DIMMs.
#
# To run the binary package on other Genoa configurations, please review
# "Readme.amd_rate_aocc400_genoa_B1.txt". In general, Genoa CPUs
# should be autodetected with no action required by the user.
#
# In most cases, it should be unnecessary to edit "amd_rate_aocc400_genoa_B1.cfg" or any
# other file besides "ini_amd_rate_aocc400_genoa_B1.py", where reporting fields
# and run conditions are set.
#
# The run script automatically sets the optimal number of rate copies and binds
# them appropriately.
#
# The run script and accompanying binary package are designed to work on Ubuntu
# 22.04.
#
# Important! If you write your own run script, please set the stack size to
# "unlimited" when executing this binary package. Failure to do so may cause
# some benchmarks to overflow the stack. For example, to set the stack size within
# the bash shell, include the following line somewhere at the top of your run
# script before the runcpu invocation:
#
# ulimit -s unlimited
#
# Modification of this config file should only be necessary if you intend to
# rebuild the binaries. General instructions for rebuilding the binaries are
# found in-line below.
#
################################################################################
# Modifiable macros:
################################################################################
# "allow_build" switch:
# Change the following line to true if you intend to REBUILD the binaries (AMD
# does not support this). Valid values are "true" or "false" (no quotes).
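# For example, a rebuild (which AMD does not support) would use:
#
#   %define allow_build true
#
# together with a new binary label extension, as described below.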
%define allow_build false # Only change these macros if you are rebuilding the binary package: %define compiler_name aocc400 %define binary_package_name amd_rate_%{compiler_name}_genoa_B %define binary_package_ext %{binary_package_name} %define binary_package_revision 1 %define build_path /home/work/cpu2017/v118/aocc4/b1/rate %define flags_file_name %{compiler_name}-flags.xml # Do NOT change build_lib_dir after the build or it will trigger a # rebuild of the xalanc. It should also remain literal: %define build_lib_dir %{binary_package_name}_lib # To enable the platform file, be sure to uncomment the flagsurl02 header line # below in the Header settings. %define platform_file_name INVALID_platform_%{binary_package_name}.xml ################################################################################ # You should never have to change binary_package_full_name: %define binary_package_full_name %{binary_package_name}%{binary_package_revision} ################################################################################ # Include file names ################################################################################ # The include file contains fields that are commonly changed. This file is auto- # generated based upon INI file settings and should not need user modification # for runs. The flags include file contains all of the compiler flags. %define inc_file_name %{binary_package_full_name}.inc %define flags_inc_file_name %{binary_package_full_name}_flags.inc # Binary label extension: # Only modify the binary label extension if you plan to rebuild the binaries. # If you plan to recompile these CPU 2017 binaries, please choose a new extension # name below to avoid confusion with the current binary set on your system # under test, and to avoid confusion for SPEC submission reviewers. You will # also need to set "allow_build" to true above. Finally, you must modify the # Paths section below to point to your library locations if the paths are not # already set up in your build environment. # Note that AMD calls an external script to set up the compiler and library # paths before initiating the build. %define ext %{binary_package_ext} ################################################################################ # Paths and Environment Variables # ** MODIFY AS NEEDED (modification should not be necessary for runs) ** ################################################################################ # Allow environment variables to be set before runs: preenv = 1 # Necessary to avoid out-of-memory exceptions on certain SUTs: preENV_MALLOC_CONF = retain:true # Define the name of the directory that holds AMD library files: %define lib_dir %{binary_package_name}_lib # Set the shared object library path for runs and builds: preENV_LD_LIBRARY_PATH = $[top]/%{lib_dir}/lib:$[top]/%{lib_dir}/lib32:%{ENV_LD_LIBRARY_PATH} # Define 32-bit library build paths: # Do NOT use $[top] with the 32-bit libraries because doing so will cause an # options checksum error triggering a xalanc recompile attempt on SUTs having # different file paths. # Do NOT change build_lib_dir after the build or it will also trigger a # rebuild of the xalanc: AMDALLOC_LIB32_PATH = %{build_path}/%{build_lib_dir}/lib32 %if '%{allow_build}' eq 'false' # The include file is only needed for runs, but not for builds. 
# include: %{inc_file_name}
# ----- Begin inclusion of 'amd_rate_aocc400_genoa_B1.inc'
############################################################################
################################################################################
################################################################################
# File name: amd_rate_aocc400_genoa_B1.inc
# File generation code date: September 23, 2022
# File generation date/time: April 07, 2022 / 05:30:57
#
# This file is automatically generated during a SPEC CPU2017 run.
#
# To modify inc file generation, please consult the readme file or the run
# script.
################################################################################
################################################################################
################################################################################
################################################################################
# The following macros are generated for use in the cfg file.
################################################################################
################################################################################
%define logical_core_count 48
%define physical_core_count 24
################################################################################
# The following inc blocks set the rate copy counts and affinity settings.
#
# intrate benchmarks: 500.perlbench_r,502.gcc_r,505.mcf_r,520.omnetpp_r,
#    523.xalancbmk_r,525.x264_r,531.deepsjeng_r,541.leela_r,548.exchange2_r,
#    557.xz_r
# fprate benchmarks: 503.bwaves_r,507.cactuBSSN_r,519.lbm_r,521.wrf_r,
#    527.cam4_r,538.imagick_r,544.nab_r,549.fotonik3d_r,554.roms_r
#
# Selected copy counts from '9254' section of CPU info
################################################################################
# default copy counts:
default:
copies = 48
# Bind commands for assigning affinity:
bind0 = numactl --localalloc --physcpubind=0
bind1 = numactl --localalloc --physcpubind=1
bind2 = numactl --localalloc --physcpubind=2
bind3 = numactl --localalloc --physcpubind=3
bind4 = numactl --localalloc --physcpubind=4
bind5 = numactl --localalloc --physcpubind=5
bind6 = numactl --localalloc --physcpubind=6
bind7 = numactl --localalloc --physcpubind=7
bind8 = numactl --localalloc --physcpubind=8
bind9 = numactl --localalloc --physcpubind=9
bind10 = numactl --localalloc --physcpubind=10
bind11 = numactl --localalloc --physcpubind=11
bind12 = numactl --localalloc --physcpubind=12
bind13 = numactl --localalloc --physcpubind=13
bind14 = numactl --localalloc --physcpubind=14
bind15 = numactl --localalloc --physcpubind=15
bind16 = numactl --localalloc --physcpubind=16
bind17 = numactl --localalloc --physcpubind=17
bind18 = numactl --localalloc --physcpubind=18
bind19 = numactl --localalloc --physcpubind=19
bind20 = numactl --localalloc --physcpubind=20
bind21 = numactl --localalloc --physcpubind=21
bind22 = numactl --localalloc --physcpubind=22
bind23 = numactl --localalloc --physcpubind=23
bind24 = numactl --localalloc --physcpubind=24
bind25 = numactl --localalloc --physcpubind=25
bind26 = numactl --localalloc --physcpubind=26
bind27 = numactl --localalloc --physcpubind=27
bind28 = numactl --localalloc --physcpubind=28
bind29 = numactl --localalloc --physcpubind=29
bind30 = numactl --localalloc --physcpubind=30
bind31 = numactl --localalloc --physcpubind=31
bind32 = numactl --localalloc --physcpubind=32
bind33 = numactl --localalloc --physcpubind=33
bind34 = numactl --localalloc --physcpubind=34
bind35 = numactl --localalloc --physcpubind=35
bind36 = numactl --localalloc --physcpubind=36
bind37 = numactl --localalloc --physcpubind=37
bind38 = numactl --localalloc --physcpubind=38
bind39 = numactl --localalloc --physcpubind=39
bind40 = numactl --localalloc --physcpubind=40
bind41 = numactl --localalloc --physcpubind=41
bind42 = numactl --localalloc --physcpubind=42
bind43 = numactl --localalloc --physcpubind=43
bind44 = numactl --localalloc --physcpubind=44
bind45 = numactl --localalloc --physcpubind=45
bind46 = numactl --localalloc --physcpubind=46
bind47 = numactl --localalloc --physcpubind=47
submit = echo "$command" > run.sh ; $BIND bash run.sh
################################################################################
################################################################################
# fprate copy counts:
fprate:
copies = 48
# Bind commands for assigning affinity:
bind0 = numactl --localalloc --physcpubind=0
bind1 = numactl --localalloc --physcpubind=1
bind2 = numactl --localalloc --physcpubind=2
bind3 = numactl --localalloc --physcpubind=3
bind4 = numactl --localalloc --physcpubind=4
bind5 = numactl --localalloc --physcpubind=5
bind6 = numactl --localalloc --physcpubind=6
bind7 = numactl --localalloc --physcpubind=7
bind8 = numactl --localalloc --physcpubind=8
bind9 = numactl --localalloc --physcpubind=9
bind10 = numactl --localalloc --physcpubind=10
bind11 = numactl --localalloc --physcpubind=11
bind12 = numactl --localalloc --physcpubind=12
bind13 = numactl --localalloc --physcpubind=13
bind14 = numactl --localalloc --physcpubind=14
bind15 = numactl --localalloc --physcpubind=15
bind16 = numactl --localalloc --physcpubind=16
bind17 = numactl --localalloc --physcpubind=17
bind18 = numactl --localalloc --physcpubind=18
bind19 = numactl --localalloc --physcpubind=19
bind20 = numactl --localalloc --physcpubind=20
bind21 = numactl --localalloc --physcpubind=21
bind22 = numactl --localalloc --physcpubind=22
bind23 = numactl --localalloc --physcpubind=23
bind24 = numactl --localalloc --physcpubind=24
bind25 = numactl --localalloc --physcpubind=25
bind26 = numactl --localalloc --physcpubind=26
bind27 = numactl --localalloc --physcpubind=27
bind28 = numactl --localalloc --physcpubind=28
bind29 = numactl --localalloc --physcpubind=29
bind30 = numactl --localalloc --physcpubind=30
bind31 = numactl --localalloc --physcpubind=31
bind32 = numactl --localalloc --physcpubind=32
bind33 = numactl --localalloc --physcpubind=33
bind34 = numactl --localalloc --physcpubind=34
bind35 = numactl --localalloc --physcpubind=35
bind36 = numactl --localalloc --physcpubind=36
bind37 = numactl --localalloc --physcpubind=37
bind38 = numactl --localalloc --physcpubind=38
bind39 = numactl --localalloc --physcpubind=39
bind40 = numactl --localalloc --physcpubind=40
bind41 = numactl --localalloc --physcpubind=41
bind42 = numactl --localalloc --physcpubind=42
bind43 = numactl --localalloc --physcpubind=43
bind44 = numactl --localalloc --physcpubind=44
bind45 = numactl --localalloc --physcpubind=45
bind46 = numactl --localalloc --physcpubind=46
bind47 = numactl --localalloc --physcpubind=47
submit = echo "$command" > run.sh ; $BIND bash run.sh
################################################################################
# Switch back to the default block after the include file:
default:
################################################################################
################################################################################
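# Illustration only (not read by the harness): for copy N of a rate run, the
# harness substitutes $BIND with the matching bindN command and $command with
# the full benchmark command line, so copy 0 is launched roughly as follows
# (the benchmark command text is a placeholder):
#
#   echo "<benchmark command and arguments>" > run.sh
#   numactl --localalloc --physcpubind=0 bash run.sh
#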
################################################################################ ################################################################################ # The remainder of this file defines CPU2017 report parameters. ################################################################################ ################################################################################ ################################################################################ # SPEC CPU 2017 report header ################################################################################ license_num =3 tester =HPE test_sponsor =HPE hw_vendor =Hewlett Packard Enterprise hw_model000 =ProLiant DL325 Gen11 hw_model001 =(2.90 GHz, AMD EPYC 9254) #--------- If you install new compilers, edit this section -------------------- sw_compiler =C/C++/Fortran: Version 4.0.0 of AOCC ################################################################################ ################################################################################ # Hardware, firmware and software information ################################################################################ hw_avail =Dec-2022 sw_avail =Nov-2022 hw_cpu_name =AMD EPYC 9254 hw_cpu_nominal_mhz =2900 hw_cpu_max_mhz =4150 hw_ncores =24 hw_nthreadspercore =2 hw_ncpuorder =1 chip hw_other =None # Other perf-relevant hw, or "None" fw_bios000 =HPE BIOS Version v1.12 11/24/2022 released fw_bios001 = Nov-2022 sw_base_ptrsize =64-bit hw_pcache =32 KB I + 32 KB D on chip per core hw_scache =1 MB I+D on chip per core hw_tcache000 =128 MB I+D on chip per chip, hw_tcache001 = 32 MB shared / 6 cores hw_ocache =None sw_other =None ################################################################################ # Notes ################################################################################ # Enter notes_000 through notes_100 here. notes_000 =Binaries were compiled on a system with 2x AMD EPYC 9174F CPU + 1.5TiB Memory using RHEL 8.6 notes_005 = notes_010 =NA: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) notes_015 =is mitigated in the system as tested and documented. notes_020 =Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) notes_025 =is mitigated in the system as tested and documented. notes_030 =Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) notes_035 =is mitigated in the system as tested and documented. notes_040 = notes_submit_000 ='numactl' was used to bind copies to the cores. notes_submit_005 =See the configuration file for details. notes_submit_010 = notes_os_000 ='ulimit -s unlimited' was used to set environment stack size limit notes_os_005 ='ulimit -l 2097152' was used to set environment locked pages in memory limit notes_os_010 = notes_os_015 =runcpu command invoked through numactl i.e.: notes_os_020 =numactl --interleave=all runcpu notes_os_025 = notes_os_030 =To limit dirty cache to 8% of memory, 'sysctl -w vm.dirty_ratio=8' run as root. notes_os_035 =To limit swap usage to minimum necessary, 'sysctl -w vm.swappiness=1' run as root. notes_os_040 =To free node-local memory and avoid remote memory usage, notes_os_045 ='sysctl -w vm.zone_reclaim_mode=1' run as root. notes_os_050 =To clear filesystem caches, 'sync; sysctl -w vm.drop_caches=3' run as root. notes_os_055 =To disable address space layout randomization (ASLR) to reduce run-to-run notes_os_060 =variability, 'sysctl -w kernel.randomize_va_space=0' run as root. 
notes_os_065 =
notes_comp_000 =The AMD64 AOCC Compiler Suite is available at
notes_comp_005 =http://developer.amd.com/amd-aocc/
notes_comp_010 =
# notes_jemalloc_000 =jemalloc: configured and built with GCC v4.8.2 in RHEL 7.4 (No options specified)
# notes_jemalloc_005 =jemalloc 5.1.0 is available here:
# notes_jemalloc_010 =https://github.com/jemalloc/jemalloc/releases/download/5.1.0/jemalloc-5.1.0.tar.bz2
# notes_jemalloc_015 =
# sw_other000 =jemalloc: jemalloc memory allocator library v5.1.0
################################################################################
# The following note fields describe platform settings.
################################################################################
# example: (edit and uncomment as necessary)
# notes_plat_000 =BIOS settings:
# notes_plat_002 = TDP: 400
# notes_plat_004 = Determinism Slider set to Power
# notes_plat_006 = PPT: 400
# notes_plat_010 = NPS: 4
# notes_plat_011 = Workload Profile = CPU Intensive
# notes_plat_012 = TSME = Disabled
# notes_plat_014 = SEV Control = Disabled
# notes_plat_015 = Fan Speed: Maximum
################################################################################
# The following are custom fields:
################################################################################
# Use custom_fields to enter lines that are not listed here. For example:
# notes_plat_100 = Energy Bias set to Max Performance
# new_field = Ambient temperature set to 10C
################################################################################
# The following fields must be set here only for Int benchmarks.
################################################################################
intrate:
sw_peak_ptrsize =32/64-bit
notes_os_thp_000 =To enable Transparent Hugepages (THP) only on request for base runs,
notes_os_thp_005 ='echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' run as root.
notes_os_thp_010 =To enable THP for all allocations for peak runs,
notes_os_thp_015 ='echo always > /sys/kernel/mm/transparent_hugepage/enabled' and
notes_os_thp_020 ='echo always > /sys/kernel/mm/transparent_hugepage/defrag' run as root.
notes_os_thp_025 =
################################################################################
# The following fields must be set here for FP benchmarks.
################################################################################
fprate:
sw_peak_ptrsize =64-bit
notes_os_thp_000 =To enable Transparent Hugepages (THP) for all allocations,
notes_os_thp_001 ='echo always > /sys/kernel/mm/transparent_hugepage/enabled' and
notes_os_thp_002 ='echo always > /sys/kernel/mm/transparent_hugepage/defrag' run as root.
notes_os_thp_003 =
################################################################################
# The following fields must be set here or they will be overwritten by sysinfo.
################################################################################
intrate,fprate:
hw_disk =1 x 960 GB SATA SSD
hw_memory000 =768 GB (12 x 64 GB 2Rx4 PC5-4800B-R)
hw_memory002 =
hw_nchips =1
prepared_by =HPE Performance Engineering
sw_file =xfs
sw_os000 =Red Hat Enterprise Linux 9.0 (Plow)
sw_os001 =Kernel 5.14.0-70.13.1.el9_0.x86_64
sw_state =Run level 3 (multi-user)
################################################################################
# End of inc file
################################################################################
# Switch back to the default block after the include file:
default:
# ---- End inclusion of '/home/cpu2017_rate/config/amd_rate_aocc400_genoa_B1.inc'
# Switch back to default block after the include file:
default:
fail_build = 0 # FIX THIS SO THAT CHECKSUMS WILL BE ENFORCED!
%elif '%{allow_build}' eq 'true'
# If you intend to rebuild, be sure to set the library paths either in the
# build script or here:
preENV_LIBRARY_PATH = $[top]/%{build_lib_dir}/lib:$[top]/%{build_lib_dir}/lib32:%{ENV_LIBRARY_PATH}
% define build_ncpus 64 # controls number of simultaneous compiles
fail_build = 0
makeflags = --jobs=%{build_ncpus} --load-average=%{build_ncpus}
%else
% error The value of "allow_build" is %{allow_build}, but it can only be "true" or "false".
%endif
################################################################################
# Enable automated data collection per benchmark
################################################################################
# Data collection is not enabled for reportable runs.
# teeout is necessary to get data collection stdout into the logs. Best
# practices for the individual data collection items would be to have
# them store important output in separate files. Filenames could be
# constructed from $SPEC (environment), $lognum (result number from runcpu),
# and benchmark name/number.
teeout = yes
# Run runcpu with '-v 35' (or greater) to log lists of variables which can
# be used in substitutions as below.
# For CPU2006, change $label to $ext
%define data-collection-parameters benchname='$name' benchnum='$num' benchmark='$benchmark' iteration=$iter size='$size' tune='$tune' label='$label' log='$log' lognum='$lognum' from_runcpu='$from_runcpu'
%define data-collection-start $[top]/data-collection/data-collection start %{data-collection-parameters}
%define data-collection-stop $[top]/data-collection/data-collection stop %{data-collection-parameters}
monitor_specrun_wrapper = %{data-collection-start} ; $command ; %{data-collection-stop}
################################################################################
# Header settings
################################################################################
backup_config = 0 # set to 0 if you do not want backup files
bench_post_setup = sync
# command_add_redirect: If set, the generated ${command} will include
# redirection operators (stdout, stderr), which are passed along to the shell
# that executes the command. If this variable is not set, specinvoke does the
# redirection.
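# With the 'echo "$command" > run.sh' submit command defined above, these
# redirection operators travel inside $command, so run.sh itself captures the
# benchmark's stdout/stderr, e.g. roughly (names are illustrative placeholders):
#
#   ./benchmark_exe workload.in > workload.out 2>> workload.err
#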
command_add_redirect = yes env_vars = yes flagsurl000 = http://www.spec.org/cpu2017/flags/HPE-Platform-Flags-AMD-Genoa-rev2.1.xml flagsurl001 = http://www.spec.org/cpu2017/flags/aocc400-flags.xml #flagsurl02 = $[top]/%{platform_file_name} # label: User defined extension string that tags your binaries & directories: label = %{ext} line_width = 1020 log_line_width = 1020 mean_anyway = yes output_format = all reportable = yes size = test,train,ref teeout = yes teerunout = yes tune = base,peak ################################################################################ # Include the flags file: ################################################################################ #include: %{flags_inc_file_name} # ----- Begin inclusion of 'amd_rate_aocc400_genoa_B1_flags.inc' ############################################################################ ################################################################################ # AMD AOCC 4.0.0 SPEC CPU2017 V1.1.8 Rate Configuration Flags for AMD64 Linux ################################################################################ # Compilers ################################################################################ default: CC = clang -m64 CXX = clang++ -m64 FC = flang -m64 CLD = clang -m64 CXXLD = clang++ -m64 FLD = flang -m64 CC_VERSION_OPTION = --version CXX_VERSION_OPTION = --version FC_VERSION_OPTION = --version ################################################################################ # Portability Flags ################################################################################ default: # data model applies to all benchmarks EXTRA_PORTABILITY = -DSPEC_LP64 # *** Benchmark-specific portability *** # Anything other than the data model is only allowed where a need is proven. # (ordered by last 2 digits of benchmark number) 500.perlbench_r: #lang='C' PORTABILITY = -DSPEC_LINUX_X64 521.wrf_r: #lang='F,C' CPORTABILITY = -DSPEC_CASE_FLAG FPORTABILITY = -Mbyteswapio 523.xalancbmk_r: #lang='CXX' PORTABILITY = -DSPEC_LINUX 526.blender_r: #lang='CXX,C' CPORTABILITY = -funsigned-char 527.cam4_r: #lang='F,C' PORTABILITY = -DSPEC_CASE_FLAG ################################################################################ # Default libraries and variables ################################################################################ default: # Libraries: EXTRA_LIBS = -lamdalloc -lamdlibm -lm MATHLIBOPT = #clearing this variable or else SPEC will set it to -lm VECMATHLIB = -fveclib=AMDLIBM # Variables: OPT_ROOT = -march=znver4 $(VECMATHLIB) -ffast-math OPT_ROOT_BASE = -O3 $(OPT_ROOT) OPT_ROOT_PEAK = -Ofast $(OPT_ROOT) -flto #Ofast enables -ffast-math ############################################################################### # AOCC 4.0.0 workarounds that do not count as PORTABILITY ################################################################################ # The workarounds in this section would not qualify under the SPEC CPU # PORTABILITY rule. # - In peak, they can be set as needed for individual benchmarks. # - In base, individual settings are not allowed; set for whole suite. # Use EXTRA_CFLAGS, EXTRA_CXXFLAGS, and EXTRA_FFLAGS for them. 
# # See: # https://www.spec.org/cpu2017/Docs/runrules.html#portability # https://www.spec.org/cpu2017/Docs/runrules.html#BaseFlags ####################### # Default workarounds # ####################### default: # Allow unused compile/link arguments without triggering warnings during build: EXTRA_CFLAGS = -Wno-unused-command-line-argument EXTRA_CXXFLAGS = -Wno-unused-command-line-argument EXTRA_FFLAGS = -Wno-unused-command-line-argument LDOPTIONS = -Wno-unused-command-line-argument #################### # Base workarounds # #################### # # *** NONE *** # ############################## # Integer workarounds - base # ############################## # intrate=base: # The following is necessary for 502/602 gcc: EXTRA_LDFLAGS = -z muldefs ######################### # FP workarounds - base # ######################### # # *** NONE *** # #################### # Peak workarounds # #################### # # *** NONE *** # ############################## # Integer workarounds - peak # ############################## 502.gcc_r=peak: #lang='C' EXTRA_CFLAGS = -Wno-unused-command-line-argument \ -fgnu89-inline EXTRA_LDFLAGS = -z muldefs ##################################### # Floating Point workarounds - peak # ##################################### # # *** NONE *** # ################################################################################ # Tuning Flags ################################################################################ ##################### # Base tuning flags # ##################### default=base: COPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -fstruct-layout=7 \ -mllvm -unroll-threshold=50 \ -mllvm -inline-threshold=1000 \ -fremap-arrays \ -fstrip-mining \ -mllvm -reduce-array-computations=3 \ -zopt CXXOPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -mllvm -unroll-threshold=100\ -finline-aggressive \ -mllvm -loop-unswitch-threshold=200000 \ -mllvm -reduce-array-computations=3 \ -zopt FOPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -Kieee \ -Mrecursive \ -funroll-loops \ -mllvm -lsr-in-nested-loop \ -mllvm -reduce-array-computations=3 \ -fepilog-vectorization-of-inductions \ -zopt LDCXXFLAGS = -Wl,-mllvm -Wl,-x86-use-vzeroupper=false LDCFLAGS = -Wl,-mllvm -Wl,-ldist-scalar-expand \ -fenable-aggressive-gather LDFLAGS = -flto \ -Wl,-mllvm -Wl,-align-all-nofallthru-blocks=6 \ -Wl,-mllvm -Wl,-reduce-array-computations=3 LDFFLAGS = -Wl,-mllvm -Wl,-enable-X86-prefetching # Libraries: EXTRA_LIBS = -lamdlibm -lm -lamdalloc -lflang EXTRA_FLIBS = # Don't put the AMD and mvec math libraries in MATH_LIBS because it will trigger a reporting issue # because GCC won't use them. Forcefeed all benchmarks the math libraries in EXTRA_LIBS and clear # out MATH_LIBS. MATH_LIBS = ######################## # intrate tuning flags # ######################## intrate: FOPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -fepilog-vectorization-of-inductions \ -mllvm -optimize-strided-mem-cost \ -floop-transform \ -mllvm -unroll-aggressive \ -mllvm -unroll-threshold=500 EXTRA_CXXOPTIMIZE = -fvirtual-function-elimination \ -fvisibility=hidden # LDCXXFLAGS is left empty as intrate CPP bmks have to use VZEROUPPER # instruction which is the default. 
LDCXXFLAGS = LDFFLAGS = -Wl,-mllvm -Wl,-inline-recursion=4 \ -Wl,-mllvm -Wl,-lsr-in-nested-loop \ -Wl,-mllvm -Wl,-enable-iv-split # Libraries: EXTRA_LIBS = -lamdlibm -lm -lflang EXTRA_CLIBS = -lamdalloc EXTRA_CXXLIBS = -lamdalloc-ext EXTRA_FLIBS = -lamdalloc ##################### # Peak tuning flags # ##################### default=peak: COPTIMIZE = $(OPT_ROOT_PEAK) \ -fstruct-layout=7 \ -mllvm -unroll-threshold=50 -fremap-arrays \ -fstrip-mining \ -mllvm -inline-threshold=1000 \ -mllvm -reduce-array-computations=3 \ -zopt CXXOPTIMIZE = $(OPT_ROOT_PEAK) \ -finline-aggressive \ -mllvm -unroll-threshold=100 \ -mllvm -reduce-array-computations=3 \ -zopt FOPTIMIZE = $(OPT_ROOT_PEAK) \ -Mrecursive \ -mllvm -reduce-array-computations=3 \ -fepilog-vectorization-of-inductions \ -zopt LDFFLAGS = -Wl,-mllvm -Wl,-enable-X86-prefetching LDFLAGS = -flto \ -Wl,-mllvm -Wl,-align-all-nofallthru-blocks=6 \ -Wl,-mllvm -Wl,-reduce-array-computations=3 LDCXXFLAGS = -Wl,-mllvm -Wl,-x86-use-vzeroupper=false feedback = 0 PASS1_CFLAGS = -fprofile-instr-generate PASS2_CFLAGS = -fprofile-instr-use PASS1_FFLAGS = -fprofile-generate PASS2_FFLAGS = -fprofile-use PASS1_CXXFLAGS = -fprofile-instr-generate PASS2_CXXFLAGS = -fprofile-instr-use PASS1_LDFLAGS = -fprofile-instr-generate PASS2_LDFLAGS = -fprofile-instr-use fdo_run1 = $command ; llvm-profdata merge --output=default.profdata *.profraw # Libraries: EXTRA_LIBS = -lamdlibm -lm -lamdalloc EXTRA_FLIBS = -lflang # Benchmark specific peak tuning flags: 500.perlbench_r=peak: #lang='C' COPTIMIZE = $(OPT_ROOT_PEAK) \ -fstruct-layout=7 \ -mllvm -unroll-threshold=50 \ -fremap-arrays \ -mllvm -inline-threshold=1000 \ -mllvm -reduce-array-computations=3 \ -faggressive-loop-transform \ -fvector-transform \ -fscalar-transform feedback = 1 502.gcc_r=peak: #lang='C' EXTRA_PORTABILITY = -D_FILE_OFFSET_BITS=64 CC = clang -m32 CLD = clang -m32 -L/usr/lib32 EXTRA_LIBS = -L$[AMDALLOC_LIB32_PATH] -lamdalloc MATHLIBOPT = -lm LDFLAGS = -flto 507.cactuBSSN_r=peak: CXXOPTIMIZE = $(OPT_ROOT_PEAK) \ -mllvm -unroll-threshold=100 \ -mllvm -loop-unswitch-threshold=200000 \ -finline-aggressive \ -mllvm -reduce-array-computations=3 \ -faggressive-loop-transform \ -fvector-transform \ -fscalar-transform EXTRA_LIBS += $(EXTRA_FLIBS) #adding flang libs to cxx linker 510.parest_r=peak: LDFLAGS = -flto \ -Wl,-mllvm -Wl,-suppress-fmas 511.povray_r=peak: #lang='CXX,C' COPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -fstruct-layout=7 \ -mllvm -unroll-threshold=50 \ -mllvm -inline-threshold=1000 \ -fremap-arrays \ -mllvm -reduce-array-computations=3 \ -zopt CXXOPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -mllvm -unroll-threshold=100\ -finline-aggressive \ -mllvm -loop-unswitch-threshold=200000 \ -mllvm -reduce-array-computations=3 \ -zopt LDCXXFLAGS = -Wl,-mllvm -Wl,-x86-use-vzeroupper=false LDCFLAGS = -Wl,-mllvm -Wl,-ldist-scalar-expand \ -fenable-aggressive-gather EXTRA_LIBS = -lamdlibm -lm -lamdalloc 520.omnetpp_r=peak: #lang='CXX` EXTRA_LIBS = -lamdlibm -lm -lamdalloc-ext 523.xalancbmk_r=peak: #lang='CXX` CXX = clang++ -m32 CXXLD = clang++ -m32 -L/usr/lib32 EXTRA_CXXOPTIMIZE = -mllvm -do-block-reorder=aggressive \ -fvirtual-function-elimination \ -fvisibility=hidden LDCXXFLAGS = -Wl,-mllvm -Wl,-do-block-reorder=aggressive \ -fno-loop-reroll EXTRA_LIBS = -L$[AMDALLOC_LIB32_PATH] -lamdalloc-ext ENV_MALLOC_CONF = thp:never 525.x264_r=peak: #lang='C' basepeak = yes 527.cam4_r=peak: COPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -fstruct-layout=7 \ -mllvm -unroll-threshold=50 \ -mllvm -inline-threshold=1000 \ -fremap-arrays \ 
-mllvm -reduce-array-computations=3 \ -zopt FOPTIMIZE = $(OPT_ROOT_BASE) \ -Kieee \ -Mrecursive \ -funroll-loops \ -mllvm -lsr-in-nested-loop \ -mllvm -reduce-array-computations=3 \ -fepilog-vectorization-of-inductions \ -zopt LDCFLAGS = -Wl,-mllvm -Wl,-ldist-scalar-expand \ -fenable-aggressive-gather LDFFLAGS = -Wl,-mllvm -Wl,-enable-X86-prefetching 531.deepsjeng_r=peak: #lang='CXX' CXXOPTIMIZE = $(OPT_ROOT_BASE) \ -flto \ -mllvm -unroll-threshold=100\ -finline-aggressive \ -mllvm -loop-unswitch-threshold=200000 \ -mllvm -reduce-array-computations=3 \ -zopt EXTRA_CXXOPTIMIZE = -fvirtual-function-elimination \ -fvisibility=hidden LDCXXFLAGS = EXTRA_LIBS = -lamdlibm -lm EXTRA_CXXLIBS = -lamdalloc-ext 544.nab_r=peak: LDFLAGS = -flto \ -Wl,-mllvm -Wl,-ldist-scalar-expand \ -fenable-aggressive-gather 549.fotonik3d_r=peak: FOPTIMIZE = $(OPT_ROOT_PEAK) \ -Kieee \ -Mrecursive \ -mllvm -reduce-array-computations=3 \ -fepilog-vectorization-of-inductions \ -fvector-transform \ -fscalar-transform # ---- End inclusion of '/home/cpu2017_rate/config/amd_rate_aocc400_genoa_B1_flags.inc' # The following settings were obtained by running the sysinfo_program # 'specperl $[top]/bin/sysinfo' (sysinfo:SHA:679c83684f6f4fc369a093999b6661d0a378911de2a006d3245423ad80d3fb9a) default: notes_plat_sysinfo_000 = notes_plat_sysinfo_005 = Sysinfo program /home/cpu2017_rate/bin/sysinfo notes_plat_sysinfo_010 = Rev: r6622 of 2021-04-07 982a61ec0915b55891ef0e16acafc64d notes_plat_sysinfo_015 = running on localhost.localdomain Thu Apr 7 05:31:08 2022 notes_plat_sysinfo_020 = notes_plat_sysinfo_025 = SUT (System Under Test) info as seen by some common utilities. notes_plat_sysinfo_030 = For more information on this section, see notes_plat_sysinfo_035 = https://www.spec.org/cpu2017/Docs/config.html#sysinfo notes_plat_sysinfo_040 = notes_plat_sysinfo_045 = From /proc/cpuinfo notes_plat_sysinfo_050 = model name : AMD EPYC 9254 24-Core Processor notes_plat_sysinfo_055 = 1 "physical id"s (chips) notes_plat_sysinfo_060 = 48 "processors" notes_plat_sysinfo_065 = cores, siblings (Caution: counting these is hw and system dependent. The following notes_plat_sysinfo_070 = excerpts from /proc/cpuinfo might not be reliable. Use with caution.) notes_plat_sysinfo_075 = cpu cores : 24 notes_plat_sysinfo_080 = siblings : 48 notes_plat_sysinfo_085 = physical 0: cores 0 1 2 3 4 5 8 9 10 11 12 13 16 17 18 19 20 21 24 25 26 27 28 29 notes_plat_sysinfo_090 = notes_plat_sysinfo_095 = From lscpu from util-linux 2.37.4: notes_plat_sysinfo_100 = Architecture: x86_64 notes_plat_sysinfo_105 = CPU op-mode(s): 32-bit, 64-bit notes_plat_sysinfo_110 = Address sizes: 52 bits physical, 57 bits virtual notes_plat_sysinfo_115 = Byte Order: Little Endian notes_plat_sysinfo_120 = CPU(s): 48 notes_plat_sysinfo_125 = On-line CPU(s) list: 0-47 notes_plat_sysinfo_130 = Vendor ID: AuthenticAMD notes_plat_sysinfo_135 = BIOS Vendor ID: Advanced Micro Devices, Inc. 
notes_plat_sysinfo_140 = Model name: AMD EPYC 9254 24-Core Processor notes_plat_sysinfo_145 = BIOS Model name: AMD EPYC 9254 24-Core Processor notes_plat_sysinfo_150 = CPU family: 25 notes_plat_sysinfo_155 = Model: 17 notes_plat_sysinfo_160 = Thread(s) per core: 2 notes_plat_sysinfo_165 = Core(s) per socket: 24 notes_plat_sysinfo_170 = Socket(s): 1 notes_plat_sysinfo_175 = Stepping: 1 notes_plat_sysinfo_180 = BogoMIPS: 5791.45 notes_plat_sysinfo_185 = Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr notes_plat_sysinfo_190 = pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt notes_plat_sysinfo_195 = pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid notes_plat_sysinfo_200 = aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe notes_plat_sysinfo_205 = popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a notes_plat_sysinfo_210 = misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb notes_plat_sysinfo_215 = bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs notes_plat_sysinfo_220 = ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f notes_plat_sysinfo_225 = avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw notes_plat_sysinfo_230 = avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total notes_plat_sysinfo_235 = cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt notes_plat_sysinfo_240 = lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter notes_plat_sysinfo_245 = pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke notes_plat_sysinfo_250 = avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 notes_plat_sysinfo_255 = rdpid overflow_recov succor smca fsrm flush_l1d notes_plat_sysinfo_260 = Virtualization: AMD-V notes_plat_sysinfo_265 = L1d cache: 768 KiB (24 instances) notes_plat_sysinfo_270 = L1i cache: 768 KiB (24 instances) notes_plat_sysinfo_275 = L2 cache: 24 MiB (24 instances) notes_plat_sysinfo_280 = L3 cache: 128 MiB (4 instances) notes_plat_sysinfo_285 = NUMA node(s): 4 notes_plat_sysinfo_290 = NUMA node0 CPU(s): 0-5,24-29 notes_plat_sysinfo_295 = NUMA node1 CPU(s): 12-17,36-41 notes_plat_sysinfo_300 = NUMA node2 CPU(s): 18-23,42-47 notes_plat_sysinfo_305 = NUMA node3 CPU(s): 6-11,30-35 notes_plat_sysinfo_310 = Vulnerability Itlb multihit: Not affected notes_plat_sysinfo_315 = Vulnerability L1tf: Not affected notes_plat_sysinfo_320 = Vulnerability Mds: Not affected notes_plat_sysinfo_325 = Vulnerability Meltdown: Not affected notes_plat_sysinfo_330 = Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via notes_plat_sysinfo_335 = prctl notes_plat_sysinfo_340 = Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user notes_plat_sysinfo_345 = pointer sanitization notes_plat_sysinfo_350 = Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, notes_plat_sysinfo_355 = STIBP always-on, RSB filling notes_plat_sysinfo_360 = Vulnerability Srbds: Not affected notes_plat_sysinfo_365 = Vulnerability Tsx async abort: Not affected notes_plat_sysinfo_370 = notes_plat_sysinfo_375 = From lscpu --cache: notes_plat_sysinfo_380 = NAME ONE-SIZE ALL-SIZE WAYS TYPE LEVEL SETS PHY-LINE COHERENCY-SIZE notes_plat_sysinfo_385 = L1d 32K 768K 8 Data 1 64 1 64 notes_plat_sysinfo_390 = L1i 
32K 768K 8 Instruction 1 64 1 64 notes_plat_sysinfo_395 = L2 1M 24M 8 Unified 2 2048 1 64 notes_plat_sysinfo_400 = L3 32M 128M 16 Unified 3 32768 1 64 notes_plat_sysinfo_405 = notes_plat_sysinfo_410 = /proc/cpuinfo cache data notes_plat_sysinfo_415 = cache size : 1024 KB notes_plat_sysinfo_420 = notes_plat_sysinfo_425 = From numactl --hardware notes_plat_sysinfo_430 = WARNING: a numactl 'node' might or might not correspond to a physical chip. notes_plat_sysinfo_435 = available: 4 nodes (0-3) notes_plat_sysinfo_440 = node 0 cpus: 0 1 2 3 4 5 24 25 26 27 28 29 notes_plat_sysinfo_445 = node 0 size: 193286 MB notes_plat_sysinfo_450 = node 0 free: 192646 MB notes_plat_sysinfo_455 = node 1 cpus: 12 13 14 15 16 17 36 37 38 39 40 41 notes_plat_sysinfo_460 = node 1 size: 193533 MB notes_plat_sysinfo_465 = node 1 free: 193091 MB notes_plat_sysinfo_470 = node 2 cpus: 18 19 20 21 22 23 42 43 44 45 46 47 notes_plat_sysinfo_475 = node 2 size: 193496 MB notes_plat_sysinfo_480 = node 2 free: 193075 MB notes_plat_sysinfo_485 = node 3 cpus: 6 7 8 9 10 11 30 31 32 33 34 35 notes_plat_sysinfo_490 = node 3 size: 193486 MB notes_plat_sysinfo_495 = node 3 free: 193079 MB notes_plat_sysinfo_500 = node distances: notes_plat_sysinfo_505 = node 0 1 2 3 notes_plat_sysinfo_510 = 0: 10 12 12 12 notes_plat_sysinfo_515 = 1: 12 10 12 12 notes_plat_sysinfo_520 = 2: 12 12 10 12 notes_plat_sysinfo_525 = 3: 12 12 12 10 notes_plat_sysinfo_530 = notes_plat_sysinfo_535 = From /proc/meminfo notes_plat_sysinfo_540 = MemTotal: 792375196 kB notes_plat_sysinfo_545 = HugePages_Total: 0 notes_plat_sysinfo_550 = Hugepagesize: 2048 kB notes_plat_sysinfo_555 = notes_plat_sysinfo_560 = /sbin/tuned-adm active notes_plat_sysinfo_565 = Current active profile: throughput-performance notes_plat_sysinfo_570 = notes_plat_sysinfo_575 = From /etc/*release* /etc/*version* notes_plat_sysinfo_580 = os-release: notes_plat_sysinfo_585 = NAME="Red Hat Enterprise Linux" notes_plat_sysinfo_590 = VERSION="9.0 (Plow)" notes_plat_sysinfo_595 = ID="rhel" notes_plat_sysinfo_600 = ID_LIKE="fedora" notes_plat_sysinfo_605 = VERSION_ID="9.0" notes_plat_sysinfo_610 = PLATFORM_ID="platform:el9" notes_plat_sysinfo_615 = PRETTY_NAME="Red Hat Enterprise Linux 9.0 (Plow)" notes_plat_sysinfo_620 = ANSI_COLOR="0;31" notes_plat_sysinfo_625 = redhat-release: Red Hat Enterprise Linux release 9.0 (Plow) notes_plat_sysinfo_630 = system-release: Red Hat Enterprise Linux release 9.0 (Plow) notes_plat_sysinfo_635 = system-release-cpe: cpe:/o:redhat:enterprise_linux:9::baseos notes_plat_sysinfo_640 = notes_plat_sysinfo_645 = uname -a: notes_plat_sysinfo_650 = Linux localhost.localdomain 5.14.0-70.13.1.el9_0.x86_64 #1 SMP PREEMPT Thu Apr 14 notes_plat_sysinfo_655 = 12:42:38 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux notes_plat_sysinfo_660 = notes_plat_sysinfo_665 = Kernel self-reported vulnerability status: notes_plat_sysinfo_670 = notes_plat_sysinfo_675 = CVE-2018-12207 (iTLB Multihit): Not affected notes_plat_sysinfo_680 = CVE-2018-3620 (L1 Terminal Fault): Not affected notes_plat_sysinfo_685 = Microarchitectural Data Sampling: Not affected notes_plat_sysinfo_690 = CVE-2017-5754 (Meltdown): Not affected notes_plat_sysinfo_695 = CVE-2018-3639 (Speculative Store Bypass): Mitigation: Speculative Store notes_plat_sysinfo_700 = Bypass disabled via prctl notes_plat_sysinfo_705 = CVE-2017-5753 (Spectre variant 1): Mitigation: usercopy/swapgs notes_plat_sysinfo_710 = barriers and __user pointer notes_plat_sysinfo_715 = sanitization notes_plat_sysinfo_720 = CVE-2017-5715 (Spectre variant 2): 
Mitigation: Retpolines, IBPB: notes_plat_sysinfo_725 = conditional, IBRS_FW, STIBP: notes_plat_sysinfo_730 = always-on, RSB filling notes_plat_sysinfo_735 = CVE-2020-0543 (Special Register Buffer Data Sampling): Not affected notes_plat_sysinfo_740 = CVE-2019-11135 (TSX Asynchronous Abort): Not affected notes_plat_sysinfo_745 = notes_plat_sysinfo_750 = run-level 3 Apr 7 05:30 notes_plat_sysinfo_755 = notes_plat_sysinfo_760 = SPEC is set to: /home/cpu2017_rate notes_plat_sysinfo_765 = Filesystem Type Size Used Avail Use% Mounted on notes_plat_sysinfo_770 = /dev/mapper/rhel-home xfs 819G 58G 761G 8% /home notes_plat_sysinfo_775 = notes_plat_sysinfo_780 = From /sys/devices/virtual/dmi/id notes_plat_sysinfo_785 = Vendor: HPE notes_plat_sysinfo_790 = Product: ProLiant DL325 Gen11 notes_plat_sysinfo_795 = Product Family: ProLiant notes_plat_sysinfo_800 = Serial: DL325G11-010 notes_plat_sysinfo_805 = notes_plat_sysinfo_810 = Additional information from dmidecode 3.3 follows. WARNING: Use caution when you notes_plat_sysinfo_815 = interpret this section. The 'dmidecode' program reads system data which is "intended to notes_plat_sysinfo_820 = allow hardware to be accurately determined", but the intent may not be met, as there are notes_plat_sysinfo_825 = frequent changes to hardware, firmware, and the "DMTF SMBIOS" standard. notes_plat_sysinfo_830 = Memory: notes_plat_sysinfo_835 = 10x Hynix HMCG94AEBRA103N 64 GB 2 rank 4800 notes_plat_sysinfo_840 = 2x Hynix HMCG94MEBRA121N 64 GB 2 rank 4800 notes_plat_sysinfo_845 = notes_plat_sysinfo_850 = BIOS: notes_plat_sysinfo_855 = BIOS Vendor: HPE notes_plat_sysinfo_860 = BIOS Version: 1.12 notes_plat_sysinfo_865 = BIOS Date: 11/24/2022 notes_plat_sysinfo_870 = BIOS Revision: 1.12 notes_plat_sysinfo_875 = Firmware Revision: 1.10 notes_plat_sysinfo_880 = notes_plat_sysinfo_885 = (End of data from sysinfo program) hw_cpu_name = AMD EPYC 9254 hw_disk = 819 GB add more disk info here hw_memory001 = 755.668 GB fixme: If using DDR4, the format is: hw_memory002 = 'N GB (N x N GB nRxn PC4-nnnnX-X)' hw_nchips = 1 prepared_by = root (is never output, only tags rawfile) sw_file = xfs sw_os001 = Red Hat Enterprise Linux release 9.0 (Plow) sw_state = Run level 3 (add definition here) # End of settings added by sysinfo_program 541.leela_r: # The following setting was inserted automatically as a result of # post-run basepeak application. basepeak = 1 520.omnetpp_r: # The following setting was inserted automatically as a result of # post-run basepeak application. basepeak = 1 505.mcf_r: # The following setting was inserted automatically as a result of # post-run basepeak application. basepeak = 1 500.perlbench_r: # The following setting was inserted automatically as a result of # post-run basepeak application. basepeak = 1 # The following section was added automatically, and contains settings that # did not appear in the original configuration file, but were added to the # raw file after the run. 
default: power_management000 = BIOS and OS set to prefer performance at power_management001 = the cost of additional power usage notes_plat_000 =BIOS Configuration notes_plat_005 = Workload Profile set to General Throughput Compute notes_plat_010 = Determinism Control set to Manual notes_plat_015 = Performance Determinism set to Power Deterministic notes_plat_020 = Last-Level Cache (LLC) as NUMA Node set to Enabled notes_plat_025 = NUMA memory domains per socket set to Four memory domains per socket notes_plat_030 = ACPI CST C2 Latency set to 18 microseconds notes_plat_035 = Thermal Configuration set to Maximum Cooling notes_plat_040 = notes_plat_045 = notes_plat_050 =The system ROM used for this result contains microcode version 0xa10110e for the notes_plat_055 =AMD EPYC 9nn4X family of processors. The reference code/AGESA version used in this notes_plat_060 =ROM is version GenoaPI 1.0.0.1-L6