TYOC038 Update – The people still left with a broken service were not directly involved in the node's disk issue, but were most likely affected by it: the system stuttered while their services were being created, so provisioning never completed efficiently. It looks like a provisioning issue similar to the one on another node earlier (I forget the name). These services never created properly, and they date back from 04/11/2021 until now. We're going to set them back to pending, adjust the due date again to make up for the lost time, and add them to the pending queue. The script should still create these first, but I want to be very clear that it's not a guarantee. That said, I don't see any reason why it wouldn't, since it's the same script, and it would select these first for the same reason it selected them first the last time around.
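For anyone wondering what "set back to pending" means mechanically, here is a minimal sketch of the kind of requeue step described above. This is purely illustrative, assuming a simple dict-per-service model; the field names and functions are hypothetical, not the actual billing or provisioning code:

```python
from datetime import date, timedelta

# Hypothetical sketch of the requeue step described in the update. The dict
# fields and function names are illustrative, not the real system's schema.
def requeue_broken_service(service: dict, lost_days: int) -> dict:
    service["status"] = "pending"  # set the broken service back to pending
    # Push the due date out to compensate for the time the customer lost.
    service["due_date"] = date.today() + timedelta(days=lost_days)
    return service

# The provisioning script works oldest-order-first, which is why services
# dating back to 04/11/2021 should be picked up again before newer ones.
def next_batch(pending_queue: list[dict], batch_size: int) -> list[dict]:
    return sorted(pending_queue, key=lambda s: s["ordered_at"])[:batch_size]
```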
So if yours never provisioned correctly, you will see it go back to pending today, with the due date pushed later.

Tokyo Update – Next, we're going to schedule maintenance with Tokyo to add a new disk to TYOC038, as well as TYOC035. Both nodes are missing 2TB and currently sit under 65% CPU usage, so creations will run again after the maintenance. That will give a total of 3.5TB of space for services; at about 10GB for the 384MB plan and 50GB for the 2.5GB plan, a rough average of 30GB per service, that works out to 116 services. There are currently 72 services we know did not provision correctly, so those will fit, plus about 44 others. TYOC033 has about 1.5TB of space, so up to another 50 there. TYOC036 already received another disk, for another 1.75TB, or 58 more services. That's room for up to 152 services once TYOC038 is taken care of, though probably closer to 100 due to CPU constraints, and because the larger plans are left until the end due to the previously discussed issue with the script.

The other nodes still need to cool off or may already be at capacity. That means 50-100 services at the end without an immediate home. I'm going to go through another round of requested refunds and see if we can make it all work. TYOC037 has 1.5TB of space but needs to calm down; we did just recently create a good amount on it. TYOC034 has about 1.2TB of space, but likewise needs to calm down. TYOC039 has 660GB of space and is pretty calm, but I don't want to put anyone else there. TYOC040 has 690GB of space that can maybe be used, but I'll have to monitor it.

(edit) Actually, a lot of this space issue can be resolved quickly. I'm going to go through everyone who has a ticket open about TYOC038, which has an active network status, and start cancelling/refunding them, since we request that tickets not be opened in these cases. An email about this was already sent as well.
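To sanity-check the capacity math above, here is a minimal sketch that reproduces the post's figures. All numbers are the author's own estimates from the update (the 30GB average and per-node free space are stated guesses, not measurements), and nothing here queries real nodes:

```python
# Rough capacity check for the Tokyo nodes, using the figures from the post.
AVG_DISK_PER_SERVICE_GB = 30  # rough average of 10GB (384MB plan) and 50GB (2.5GB plan)

free_space_gb = {
    "TYOC038+TYOC035": 3500,  # combined space after the new disks are added
    "TYOC033": 1500,
    "TYOC036": 1750,          # already received its extra disk
}

capacity = {node: gb // AVG_DISK_PER_SERVICE_GB for node, gb in free_space_gb.items()}
print(capacity)  # {'TYOC038+TYOC035': 116, 'TYOC033': 50, 'TYOC036': 58}

broken_services = 72  # known failed provisions going back into the queue
extra_on_038_035 = capacity["TYOC038+TYOC035"] - broken_services  # about 44 others fit
total_room = extra_on_038_035 + capacity["TYOC033"] + capacity["TYOC036"]
print(total_room)  # 152, matching the "up to 152 services" figure in the post
```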
Can anyone tell me what he's saying? The translation reads really strangely.

He's just a noob who can't even get a machine booted and a system installed on it.
No telling when this will get fixed.
One mother hen with 150 chicks? (i.e., one host node packing 150 VPSes?)
The boss is still working overtime learning the tech.
Node 39 might just end up the biggest winner.
65%