Veeam hard disk read slow.
Veeam hard disk read slow. Basically, the bottleneck stat doesn't really lie, imo, and you can just look at how fast you can do random reads on the USB disk. The volume is hosted on a Compellent SC4020 attached via 10 Gb iSCSI, so no storage integrations. I’d also check the hypervisor’s resources, just in case it’s over-provisioned, if you restored a new copy of the VM and left the original running without network. The storage can be local disks, directly attached disk-based storage (such as a USB hard drive), or an iSCSI/FC SAN LUN if the server is connected to the SAN fabric. exe -SHOWPROXYUSAGES is no longer working. When a replication takes place, we see massive write latency in our VM stats for the Veeam proxies using hot-add. Not a support forum! I just can't believe that neither Microsoft nor Veeam has a solution for this! We ended up disabling CBT for our high-I/O VMs and deleting the MRT and RCT files for the virtual hard disks. If this is the case, try setting the throttling to only allow connections from the Veeam Community discussions and solutions for: Hard disk 1 (147,3 GB) 5,7 GB read at 84 MB/s. Our support was able to isolate the issue to VMs with Resilient Change Tracking (RCT) enabled. The data files are not there on the target. Every time I try to have Veeam back up our SQL server, the disk latency on the SQL server increases within days after the backup. Veeam Community discussions and solutions for: I also noticed on the servers that the disk read + write never goes over 200 MB/s during the replication. 589 as a solution to back up our barebone MS SQL Server 2017 on Windows Server 2016, backing up the whole server using the installed Veeam CBT driver. Just thought I'd tell you what I'm seeing because, although the issue is exactly the same, we are using another backup solution.
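Several replies in these threads suggest testing raw disk speed outside Veeam (DiskSpd is the usual Windows tool). As a rough, tool-agnostic sketch, the snippet below compares sequential vs. random read throughput on a scratch file; the file size and block size are arbitrary choices, and on a real test you would point it at a file larger than RAM on the disk under test, since the OS page cache inflates both numbers:

```python
import os, random, tempfile, time

def read_throughput(path, block=512 * 1024, rand=False):
    """Return MB/s when reading `path` in `block`-sized chunks."""
    size = os.path.getsize(path)
    offsets = list(range(0, size - block, block))
    if rand:
        # Random access pattern, similar to reading scattered changed blocks.
        random.shuffle(offsets)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    return (len(offsets) * block / 2**20) / (time.perf_counter() - start)

# Create a small scratch file; for a meaningful result, place a much larger
# file on the disk you actually want to measure.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(64 * 2**20))  # 64 MiB test file
seq = read_throughput(tmp.name)
rnd = read_throughput(tmp.name, rand=True)
print(f"sequential: {seq:.0f} MB/s, random: {rnd:.0f} MB/s")
os.remove(tmp.name)
```

If the random figure is a small fraction of the sequential one, slow incremental reads on fragmented changed blocks are expected.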
-When I do a restore of those VMs, including the... During the most recent backup (incremental), the read speed was 174 MB/s on drive C and 83 MB/s on drive D, but for the current job in progress those speeds are 36 MB/s and 22 MB/s. Dedicated backup server (Veeam B&R currently installed on it): HP ProLiant D320e Gen 8, Windows Server 2012 R2, 16 GB RAM, 4 cores, 500 GB hard disk. Internal backup storage: QNAP Turbo NAS TS420U, 10 TB. Offsite external backup storage: currently a Samsung M3 4 TB USB 3.0 Slimline Portable Hard Drive on a daily rotation. Running the Veeam B&R console on a laptop is OK. Sometimes the backup job is very slow because the entire disks are read on one or more VMs. It seems those local disks are read by the target proxy via NBD. I find backups are a fast 93 MB/s, but restores are slow at 3 MB/s. ESX 5.5 hosts (a file server and Exchange server on one, an SQL server and the Veeam VM on the other); production storage is a Synology DS1513+ with four WD Red drives configured in RAID 10. I've made sure that I have no residuals left behind. If you test with something like DiskSpd, what are your results? The backups run in the evening. If you edit the VM and look at the properties, does it show the disk properly there? Usually when I see a 0 B disk, it’s actually a VM issue and not a Veeam issue. Veeam Community discussions and solutions for: "Read at 1 MB/s" of Servers & Workstations. Those two VMs always ran... Based on these statistics we can say that the Source Data Mover spends most of its time reading data because of slow reads in NBD mode. It might be slow when hot-adding the disks at job start, but disk processing speed will be way faster. While the proxy is reading the data to back up, the production VM experiences higher I/O latency. I know you mentioned a VeeamZIP backup, but it was slightly less clear what feature you used to restore.
Using Storage Snapshots with a physical SAN-attached proxy would not really help here, since it just changes where the read I/Os happen. 2020 01:05:24 :: Hard disk 2 (784,2 GB) 14,8 GB read at 3 MB/s. Those two lines up there tell me that the copy job processed the second hard disk 20 times slower than Hard disk 1. It happened twice in the last 20 days, both times on Monday. Yes, at first we had RCT enabled in the Veeam backup and found that we would need to live-migrate the SQL VM to get the performance back (even after shutting down the server and removing the .mrt and .rct files). Since the CPU of the Veeam server is < 10% and disk usage is also low, we wanted to check Veeam process usage and found out that also Veeam. I do have a ticket in with Veeam support, but they have currently advised me to talk with VMware, which I am doing. One vdisk is 16 drives and runs in RAID 6. Since I need to back up the entire VM and run pre/post-job scripts, I created a B&R managed agent job for it. If I run a full backup, the job will show over 1000 MB/s read and the repo will show the NIC at 8 Gbps coming in. I'm wondering if anyone else has seen extremely slow FLR mounts after upgrading from Veeam 10 to 11. Now when I resumed them, the calculating-digests step is taking forever, like 60 minutes for a 50 GB hard drive; this single job would take days at this rate. So we've opened a case with Microsoft. Veeam Community discussions and solutions for: >6 TB to our DR site, however the largest of the drives was taking a significantly long time to read/copy over. Whatever I do, I cannot reach the FTP transfer speed of 100 MB/s per hard disk. Running into a very strange issue using backup copy jobs to create replication seed files that contain VBK/VBM.
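Statistics lines like "Hard disk 2 (784,2 GB) 14,8 GB read at 3 MB/s" quoted throughout these threads can be pulled out of session logs to spot slow disks automatically. A minimal sketch (the regex is derived only from the example lines shown here, and normalizes European decimal commas; real session logs may vary):

```python
import re

# Matches job statistics lines of the shape seen in the posts above, e.g.
#   "Hard disk 2 (784,2 GB) 14,8 GB read at 3 MB/s"
LINE_RE = re.compile(
    r"Hard [Dd]isk (?P<disk>\d+) \((?P<size>[\d.,]+) (?P<size_unit>[GT]B)\) "
    r"(?P<read>[\d.,]+) (?P<read_unit>[KMGT]B) read at (?P<rate>[\d.,]+) (?P<rate_unit>[KM]B)/s"
)

def parse_stats(text):
    """Yield {disk, rate_MBps} for every statistics line found in `text`."""
    for m in LINE_RE.finditer(text):
        d = {k: v.replace(",", ".") for k, v in m.groupdict().items()}
        rate = float(d["rate"])
        if d["rate_unit"] == "KB":
            rate /= 1024  # normalize KB/s to MB/s
        yield {"disk": int(d["disk"]), "rate_MBps": rate}

log = """Hard disk 1 (147,3 GB) 5,7 GB read at 84 MB/s
Hard disk 2 (784,2 GB) 14,8 GB read at 3 MB/s"""
rates = {r["disk"]: r["rate_MBps"] for r in parse_stats(log)}
print(rates)  # {1: 84.0, 2: 3.0}
```

Sorting the result by rate immediately surfaces the "20 times slower" disk described above.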
But if I manually copy the VBK file from the source repository and "paste" it to the USB 3 disk (over the share from the NAS), speed is fine. It just took 5.5 hours to back up 68 GB of data on the file server. The device is not ready. sysstat -x reports 27% disk utilization at max during a VBR backup. Here’s a summary: 2 ESX 5.5 hosts (a file server and Exchange server on one, an SQL server and the Veeam VM on the other); production storage is a Synology DS1513+ with four WD Red drives configured in RAID 10. Also, with changed block tracking the entire disk should not be read during incremental job runs. Veeam Community discussions and solutions for: Hard disk fully read or only used quota? of Microsoft Hyper-V R&D Forums. It still has to read the original data from the disk, either through the VM snapshot or the storage snapshot. Veeam Community discussions and solutions for: Agent Too Slow of Veeam Agent for Microsoft Windows. Shouldn't occur normally unless you're... This is not a support forum! Please do not bring environment-specific issues here but contact our Customer Support for assistance and troubleshooting. Veeam Community discussions and solutions for: Backup to Disk - Synthetic Full - very Slow of VMware vSphere. Utilizing both backup jobs and replication jobs, I want to replicate VMs to separate dedicated mirrored virtual volumes within the same stretch cluster. Btw, when you say 30 MB/s, do you mean the entire job processing rate or the hard disk read speed? With NBD transport mode, processing is limited to around 30 MB/s, if I recall correctly. Another approach would be to place 10 Gb at the management interfaces. Veeam Community discussions and solutions for: SAN Backup Slow - real slow! of VMware vSphere. Asynchronous read operation failed. Failed to upload disk.
In general, the VDDK libraries available in VMware get merged into the Veeam software. The main reasons are slow data retrieval: "Source 99%" means that source storage data retrieval speed is the bottleneck. Using proxy X for disk Hard disk 1 [nbd]: Hard disk 1 (200 GB) 3,8 GB read at 901 KB/s. How could I increase this speed? I have two sites, production and DR. Veeam Community discussions and solutions for: Slow transfer to cloud provider of Veeam Backup & Replication. This task reads the entire source VM from disk in order to verify the integrity of the VM data prior to failing back to it. Veeam Community discussions and solutions for: Really slow merging to cloud storage of Veeam Backup & Replication. The bottleneck is always displayed as target datastores! Using backup proxy **** for disk Hard disk 1 [san/nbd]; it also hangs trying to complete the following subtasks: Finalizing, and Copying data from disks (but showing zeroes for speed and KB/MB read). After some time (usually more than an hour) it continues without errors and eventually the whole task is complete. Veeam Community discussions and solutions for: Slow Backup Speed (7-30 MB/s). So I suggest you try a hot-add proxy (just install an extra Windows VM on both of your hosts). :: Swap file blocks skipped: 44.0 MB 18/11/2013 7:23:54 p.m.
Bottleneck: Target. Disk reads (live data, since the job is running now) are ~4-5 MB/s per hard disk. A file server with 10 separate VMDK files can take hours at 7 MB/s per disk, yet a VM with a single disk can transfer at >500 MB/s over the same infrastructure (COFC 8 GB FC). According to the bottleneck stats, it is clear that the issue is write speed to the target storage. The backend disk is 64 KB ReFS on a RAID 5 512 Kb-striped SSD array. Now for disks 6 and 7 it tells me: Preparing backup proxy NL-HQ-VEEAM01 for disk Hard disk 6 [hotadd]. It seems like it is stuck, because it is only running on 2 of the 4 available threads. From what I have read, Synology products offer the possibility to disable strict allocation. Hi Foggy, the bottleneck is Target. However, my other internal SSD, Terabyte (X:) (953. GB), behaves differently. I'm currently evaluating Veeam B&R 6. ESXi 7.0u1d (soon to be updated to U2). ReFS seems to be quite a bit slower than NTFS. iSCSI LUNs are used to mount datastores in the ESX hosts. Backup storage for the Veeam backup jobs is external USB hard disks. Disk 0 AVAGO SMC3108 SCSI Disk Device; Capacity: 161 TB; Formatted: 161 TB; System disk: No; Page file: No; Read speed 82. 8 GB read at 4 MB/s. From previous restores provided by the Windows built-in backup tool, recovery was always very fast, and I think this should be the goal too. One vdisk is made of 3 disks and runs in RAID 5.
The disk is at under 50% capacity, and from Task Manager the CPU is at ~25%, RAM (32 GB) is at 60%, and the disk fluctuates between 0% and 20% utilization, with the disk transfer rate at 0, peaking occasionally. So we see that during backup, when Veeam was reading and writing from and to the same disk, performance was very good. I can only recommend testing every component independently from Veeam: 1) test repository write speed, 2) test the network, 3) test source read speed. As far as I see, support already did that and figured out that the disk performance is low. You can find a more detailed description of bottleneck analysis here, on the Veeam Community Forums: READ THIS FIRST: [FAQ] FREQUENTLY ASKED QUESTIONS v6 - R&D Forums. Do you get Load results from a restore job? (e.g., Load: Source 63% > Proxy 73% > Network 54% > Target 0%) Also, how loaded is the 10 Gb link between sites? Do other systems use it? This feels like disk I/O bottlenecking, so @regnor's comment around instant recovery makes sense. iSCSI LUNs are used to mount the datastores. Disk 0 AVAGO SMC3108 SCSI Disk Device; Capacity: 161 TB; Formatted: 161 TB; System disk: No; Page file: No; Read speed 82.9 MB/s; Write speed 0 KB/s; Active time 5%; Average response time 0.3 ms. Right now Veeam is running a full backup: the first 3 disks are backed up, and 2 disks are still being backed up. Ranging between 700 and 1000 ms. I was able to get clarification that each backup job will need to read the entire disk, even if the VM is part of other jobs that have completed. -When I do a backup of any of the VMs, it runs at about 91 MB/s. It seems only slow when the server has been in use; on weekends it runs fine (about 30 MB/s read on VMs), but during the week this slows to 11 MB/s. And it appears to have re-read the entire hard disk. Veeam Community discussions and solutions for: Backup copy job is slow of Veeam Backup & Replication. Hard disk 1 (50.0 GB) 45.2 GB read at 13 MB/s 01:14:52. The Veeam server is physical with an iSCSI connection into the iSCSI network and has all mirrored DataCore ESX virtual disks presented to it in read-only mode.
Even after more than 10 hours, the file is still being written at over 100 MB/s. There are many reasons why a restore can be slower than a backup, for example: the backup ran in SAN mode and the restore proxy works in network mode (btw, from the log I see it's NBD), slow reads from the backup repository (e.g. storage-level deduplication), slow decompression due to CPU load on the backup repository, and many others. Hard disk 20 (3.9 TB) 1.9 TB read at 41 MB/s [cbt] - Can anyone tell me how long it should take this disk to complete backups? The backup ran for 13:21 hrs and failed to complete due to the backup window setting. I cannot cancel the job either. Using source proxy name1 for disk Hard disk 1 [hotadd]. Using target proxy name2 for disk Hard disk 1 [hotadd]. While the proxy is reading the data to back up, the production VM experiences higher I/O latency. Hi, I've been battling this for around 2 months with HPE and Veeam tech support. One issue was that VMs with many hard disks get really slow, e.g. ... Yet running backups on larger VMs will take a lot more time now. During the next job cycle, the backup copy job will transfer the data it was missing on the previous run. CID: 00681602. I hadn't yet found an easy way to tell Veeam to make a new full vs. an incremental, but deleting all previous full and incremental backups on the external drive had seemed to work in the past. Here is the job summary I've been using. That's my 2 cents at least. Two nights ago, I started noticing that for two VMs (out of 19 being backed up and 7 replicated), the read speeds in their jobs, regardless of whether it's backup or replication, have become horrendously slow.
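For the "1.9 TB read at 41 MB/s, how long should this take?" question, the expected duration is just remaining data divided by read rate. A back-of-the-envelope check with the figures from that post lines up with the reported ~13-hour run:

```python
def eta_hours(remaining_gb, rate_mb_s):
    """Hours needed to read `remaining_gb` at `rate_mb_s` (1 GB = 1024 MB)."""
    return remaining_gb * 1024 / rate_mb_s / 3600

# 1.9 TB still to read at 41 MB/s:
print(f"{eta_hours(1.9 * 1024, 41):.1f} h")  # → 13.5 h
```

At 41 MB/s, the full 3.9 TB disk would need roughly 27-28 hours, so it cannot fit a 15-hour backup window without a faster transport mode or CBT reducing the read set.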
Here the transfer speed does not slow down. If your backup copy jobs cannot complete within the sync interval, then you can either extend the interval or re-configure your backup copy jobs to send less data (use jobs with fewer VMs added) to the repository. In yesterday’s Word from Gostev forum digest he stated: “Important Hyper-V news: for the past few months, we've been working on a strange issue with a few customers: poor I/O performance on VMs protected by Veeam.” Summary: Previously when I upgraded from V10 to V11, I had a similar issue with backups from my StoreOnce repository to tape, where speeds were very slow. I have a small virtual setup which I’m backing up using Veeam B&R 7.0, utilizing both backup jobs and replication jobs. Is it the way it is supposed to proceed? Below is the message I have in the statistics: Using target proxy X.X.X.X for disk Hard disk 1 [nbd]: Hard disk 1 (200 GB) 3,8 GB read at 901 KB/s. It is reading the entire disk data to identify what blocks should be transferred to the target. Btw, what is the RAID configuration on the target host? Veeam Community discussions and solutions for: Export List of Hard Disk Read Times of PowerShell. Veeam Community discussions and solutions for: Direct Storage Access FC - slow of VMware vSphere.
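The "Load: Source 63% > Proxy 73% > Network 54% > Target 0%" statistics line quoted in these threads names the busiest processing stage; the stage with the highest busy percentage is the bottleneck. A small, hypothetical parser for that line format (derived only from the example shown in the posts):

```python
import re

def bottleneck(load_line):
    """Return (stage, busy_pct) for the busiest stage of a Veeam 'Load:' line.

    Expected shape, as quoted in the forum posts:
        "Load: Source 63% > Proxy 73% > Network 54% > Target 0%"
    """
    stages = {name: int(pct) for name, pct in re.findall(r"(\w+)\s+(\d+)%", load_line)}
    busiest = max(stages, key=stages.get)
    return busiest, stages[busiest]

print(bottleneck("Load: Source 63% > Proxy 73% > Network 54% > Target 0%"))
# → ('Proxy', 73)
```

In that example the proxy, not the source storage, spent the largest share of time busy, which is where tuning effort should go first.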
In addition, how can I confirm this? Backup storage for the Veeam backup jobs is external USB hard disks outside the server room; the targets are CIFS folders on the external hard disks. Backup storage for the Veeam replication jobs is another Synology DS1513+; the targets are also iSCSI LUNs. The switches are all gigabit per port. Veeam Community discussions and solutions for: "Hard disk 4 (0. B) read" of Servers & Workstations. Hard disk 1 (40,0 GB) 2,6 GB read at 120 MB/s [CBT]. Especially with hot-add, Veeam is able to read data directly from the VMDK by itself, in a fashion that is optimized for backup. We're evaluating Veeam Agent for Windows 2.0 Update 3 on a host running on an i7 system with 64 GB, SSD and HD storage, and a 1G NIC; VBR 12. We have a similar virtual backup tool where we utilize multiple backup threads for a single hard disk. Each hard disk is 3 TB capacity and utilizes 1 backup thread per hard disk. Due to this, the backup window is getting extended. I'm contacting NAS support and will probably contact Veeam support on this; we are still seeing slow write performance on reverse incrementals on the 12- and 16-disk NAS devices in use (right now as low as 4 MB/s).
Backing up locally to a Windows 2012 Veeam server with direct-attached disks (MD1220) and Direct SAN access is giving me good results, BUT when I replicate to our DR site the replication is really slow; even the incrementals are taking hours. Hard disk 2 (15 TB) 6,2 TB read at 75 MB/s [CBT] 23:59:11 versus Hard disk 2 (15 TB) 1,5 TB read at 1016 MB/s 27:18. So from slow disks to slow disks at 1016 MB/s, that's ten times faster. Previous backup sessions are shown under History -> Backup -> Agent tab of the Veeam console; however, even an incremental run should show the detected source disk capacity. Veeam Community discussions and solutions for: backing up (10 TB) from the internal hard disk to an SMB share on the NAS. Since 1 MB/s is quite slow, I'd recommend contacting our support team and asking our engineers to look for some hints in the debug logs. Agent failed to process method {DataTransfer.SyncDisk}. Not a support forum! I can perform reads/writes within Windows when manually copying single or multiple .VBK files from the NAS to the Veeam backup server, with acceptable performance. My jobs ran fine for the last few days. I paused them to set up a few backup jobs. The problem we have, however, is that a replication done using CBT is slower than doing a full copy, and slower than a replication without CBT enabled. In our environment we have dual 40 GbE to the switch and 10 GbE to our repo and multiple SANs. The backup window for the job is set for 15 hours, 5pm - 8am. The read speed will depend on changed-block placement: the more random the changed blocks are, the slower they will be read from source storage.
The backups run in the evening, so there won't be user activity on the server. Only WAN acceleration is in use. At some point in the past month, just after a client update, the read speed of the C drive slowed down to ~8 MB/s from a previous read speed of 150 MB/s+. Hard disk 1 (80,0 GB) 9,8 GB read at 30 MB/s [CBT]. I've got an issue at the moment where backup jobs are taking a long time to begin processing: they get to "Hard disk 1 (0. B) 0 B read at 0 KB/s [CBT]" and hang there for a long time before actually starting. There are no checkpoints on the host. Potentially, any process that interacts with a backup file may hang when trying to open one. Using my SQL server again, my last replica took approx 2-3 minutes for that machine. If I recall correctly, I remove the disk from the VM and then add it back. Hard disk 1 (40.0 GB) 14.9 GB read at 54 MB/s [CBT]. The storage can be local disks, directly attached disk-based storage (such as a USB hard drive), or an iSCSI/FC SAN LUN if the server is connected to the SAN fabric. At the very least, the disk read speed has never been this slow in the past, even with incremental backups. 23:39:17 Using backup proxy xxxx for retrieving Hard disk 1 data from storage snapshot on vdesvc01 00:07; 23:39:25 Hard disk 1 (50 GB) 338 MB read at 131 MB/s [CBT] 00:08; 23:39:42 Using backup proxy xxxx for retrieving Hard disk 2 data. Hi all, it’s my first post here. At least when creating the first, full backup and incrementals. 8-12-2013 23:25:29 :: Hard disk 2 (750,0 GB); 8-12-2013 23:25:29 :: Calculating disk 2 digests. Is a backup copy job also using the proxy on the other side?
Veeam Backup & Replication installs the following components and services: Veeam Installer Service is an auxiliary service that is installed and started on any Windows server once it is added to the infrastructure. Not a support forum! I am going to run a backup to a local disk from that server to see what read speeds we get; this will hopefully give better insights on whether the source is the problem. If you edit the VM and look at the properties, does it show the disk properly there? Usually when I see a 0 B disk, it’s actually a VM issue and not a Veeam issue. Veeam Backup & Replication R&D Forums. Asynchronous read operation failed. Given what I'm seeing, I don't think it has to do exclusively with ReFS or Veeam; it seems to be a bug in how RCT handles read operations inside the virtual disk. My replica jobs hang on "Hard disk 1 (0 B) 0 B read at 0 KB/s [CBT]". They sit there indefinitely. I’m evaluating Veeam in my lab. Veeam Community discussions and solutions for: during the process the action "Creating fingerprints for hard disk (x)" appears. When I run a backup copy job to create the seed from existing backups on disk, copying some VMs is VERY slow (taking 24-48 hrs) while others are fast. The last backup took approx 14 hours.
Veeam Community discussions and solutions for: backup performance (very, very slow speed). These backups have very slow performance (reading at less than 10 MB/s). Investigating the statistics for the job, one can see that in the problematic virtual machine (with two hard disks), one disk stays for hours at "59,9 GB read at 5 MB/s". Veeam Community discussions and solutions for: slow backup speed 02612135 of Veeam Agent for Microsoft Windows. I'm looking at one right now: Hard Disk 2 (2.9 GB) 3.3 GB read at 73 MB/s [CBT]. The disk backup was done from a non-deduplicated NetApp volume. Please don't forget to share the support case ID. Cause: an issue with the new high-performance backup file interaction engine logic that can happen if a backup storage is very slow to respond to a request to open a backup file. Any advice here is appreciated, because the copy jobs for all my VMs are taking longer than the interval of the normal backup jobs. The disk is a 4 TB Intel P4600 NVMe SSD, a pretty fast one, with ~2 TB used in databases and system files. Can you use a Veeam proxy in virtual appliance mode (with NBD disabled) to back up that file server instead? As you are changing the disk configuration for the VM, even though you are just adding a new disk. 1,3 TB (102,5 GB used). Hard disk 1 (100 GB) 2,2 GB read at 103 MB/s [CBT]; Hard disk 2 (300 GB) 0 B read at 0 KB/s [CBT]. The corresponding log entries show when the entire disks are read. When performing a failback after a "Planned Failover" operation, Veeam requires a task called "Calculating Original Signature Hard Disk" to be performed. This task reads the entire source VM from disk in order to verify the integrity of the VM data prior to failing back to it.
Data: Processed: 969 GB, Read: 68.8 GB, Transferred: 46.8 GB. Symptoms: backup sits at 0 KB on the hard disk read step. I benchmarked: the NAS is capable of sustained 112 MB/s read and write speeds. The bottleneck stats show Source, and yes, I do see the [CBT] tags at the end of the VM hard disk lines. "Primary Bottleneck: Source"; "Hard Disk 1 (50.0 GB) 26.0 GB read at 48 MB/s (GBT)"; "Hard Disk 2 (1.9 TB) 1.9 TB read at 63 MB/s (GBT)"; "Hard Disk 3 (500.0 GB) 24.0 GB read at 60 MB/s (GBT)". One thing to consider is that when you restore from that hybrid Nimble, you're likely doing "cold reads" directly from the SATA disks; that'll slow things down a bit. We had an immutable copy at our DR site, which has now been copied as the replica for the CDP job; however, we are now seeing the VBR server state "Processing disk hard disk x" (0% done). You should be able to read this next to the disk read speeds. Bottleneck stats: it will say Busy: Source, Proxy, Network, Target. I've contacted support and have spoken to 3 different level 1 techs. The registry fix I used to resolve the slow backup speeds from the HP StoreOnce (disk) to tape: Key: DisableHtAsyncIo, Path: HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\, Type: I have a VM with a 16 TB independent disk. Veeam B&R 11, Veeam MS Agent 5, vSphere 7.x. 1420 running on an i5 system with 16 GB, with two 1G NICs, NIC1 connected. JustinCredible, the bottleneck shows the component that was busy most of the time versus waiting for the other stages of processing. When I back up, my main internal SSD shows: Local SSD (C:) (476. GB). With no throttling, backup read speed fluctuates between around 60 MB/s and 140 MB/s (note: when the read speed was around 77 MB/s, network usage shown in Task Manager on the server computer for Veeam Agent was similar). Host 2 has more than one network connection, and Veeam is using the slower connection somehow; I can't configure that, in my opinion. Logged on to the proxy in the failover DC to further analyze, I see that the proxy is communicating with the host over the network (I would have expected direct disk reads since it's hot-added??). Your direct line to Veeam R&D. Only if your issue is confirmed to be a bug or a limitation that is worth discussing with the R&D team may you also post it here, according to the forum rules.
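Job summary counters like "Processed: 969 GB, Read: 68.8 GB, Transferred: 46.8 GB" let you derive the effective change rate (Read/Processed, what CBT actually had to pull from disk) and the data reduction from compression/dedupe (Read/Transferred). A small sketch of that arithmetic:

```python
def job_ratios(processed_gb, read_gb, transferred_gb):
    """Derive change rate and data reduction from Veeam job summary counters.

    processed   - total size of the source disks
    read        - blocks actually read (changed blocks, per CBT)
    transferred - bytes written to the repository after compression/dedupe
    """
    return {
        "change_rate_pct": round(100 * read_gb / processed_gb, 1),
        "reduction_x": round(read_gb / transferred_gb, 1),
    }

print(job_ratios(969, 68.8, 46.8))
# → {'change_rate_pct': 7.1, 'reduction_x': 1.5}
```

A ~7% daily change rate is what an incremental should read; if a job instead reads close to 100% of "Processed", CBT is not being used for that run (e.g. after the RCT resets described above).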
You could try to force network mode for the target proxy and see whether the processing speed becomes better (hot-add can put high I/O load on the target in some cases). "0 B read at 0 KB/s [CBT]", then after taking a long time on this step for each disk, the backup finishes successfully with a processing rate between 50 MB/s and 90 MB/s. 5 GB read at 142 MB/s.