Search
441 questions found on Aha
- ENT · Medical consultation Q. Nausea and lower-abdominal cramping from a multivitamin supplement. Ingredients (amount and % daily value): … 100%, C 90 mg 100%, D 40 mcg (1,600 IU) 200%, E 15 mg 100%, K 120 mcg 100%, B1 thiamine 1.2 mg 100%, B2 riboflavin 1.3 mg 100%, B3 niacin 16 mg 100%, B6 1.7 mg 100%, folate 400 mcg DFE 100%, vitamin B12 9.6 mcg 400%, biotin 30 mcg 100%, pantothenic acid 5 mg 100%, zinc 11 mg 100%, selenium 55 mcg 100%, manganese 2.3 mg 100%, chromium 35 mcg 100%, molybdenum 45 mcg 100%
- Year-end settlement · Tax & accounting Q. How is my pay settled when one week is deducted from my monthly salary?
Reference: hourly wage 8,700 won, 8 hours of work per day.
A. Previous situation, pay before tax (settled week by week)
A-1. Daily wage: 8,700 won × 8 hours = 69,600 won
A-2. Weekly wage: 69,600 won × 6 days (including weekly holiday allowance) = 417,600 won
B. Current situation, pay after tax (settled month by month). After switching to a monthly salary, pay is calculated from the 8,700 won hourly wage using 4.345 weeks, the average number of weeks in a month.
B-1. Monthly salary: 8,700 won × 209 hours = 1,818,300 won
B-2. After tax deductions: 1,667,150 won
C. Meal allowance of 100,000 won
C-1. In July the meal allowance was given as meal vouchers.
C-2. In August it was paid as money (included in the salary). Since the meal allowance is 100,000 won, it is non-taxable.
★★ The problem: because of COVID-19 I took one week of unpaid leave, so my monthly salary has to be paid with that week excluded. ★★ (I have attached my pay statements so the calculation can be exact.)
July is my usual pay, and August is the pay with one week excluded. Note that in July I received meal vouchers, so there is no meal allowance, while in August it was paid as money, so the 100,000 won meal allowance is included in the salary.
I would like to know whether this month's pay, received in August, was settled correctly.
Separately, my own rough calculation:
1,667,150 won (final monthly pay after tax)
1,667,150 won ÷ 4.345 weeks = 383,693.9 won (one week's pay)
1,667,150 won - 383,693.9 won = 1,283,456.1 won (pay with one week excluded)
1,283,456.1 + 100,000 (non-taxable meal allowance) = 1,383,456.1 won
I calculated it this way because I figured it comes out the same whether the tax is deducted afterwards or the division is done on the after-tax amount. Is my calculation wrong?
On the company's statement, the non-taxable 100,000 won meal allowance is included in the calculation, and yet the 417,600 won (one week's pay) that actually should be included is subtracted at the very end without even deducting tax, so I think the calculation method itself is off. Done that way, the amount I receive can only come out much smaller.
When I asked around, opinions were split between "the company's calculation is right" and "your calculation is right", so I would like to ask the experts for advice.
Two-line summary:
1. Is the company's method of calculating and paying the salary correct?
2. Is there any money the company failed to pay because of a miscalculation?
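A minimal sketch, in Python, that simply reproduces the asker's own arithmetic from the question above. Every figure is taken from the question; this restates the asker's calculation and does not settle which settlement method is legally correct.

    # Restates the asker's calculation; all figures come from the question itself.
    net_monthly = 1_667_150        # monthly pay after tax, per the asker
    weeks_per_month = 4.345        # average weeks per month used in the question
    meal_allowance = 100_000       # non-taxable meal allowance paid in cash in August

    weekly_pay = net_monthly / weeks_per_month        # about 383,693.9 won
    minus_one_week = net_monthly - weekly_pay         # about 1,283,456.1 won
    asker_expected = minus_one_week + meal_allowance  # about 1,383,456.1 won
    print(round(weekly_pay, 1), round(minus_one_week, 1), round(asker_expected, 1))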
- Supplements · Medicine & supplements Q. Is taking this much vitamin B too much? Hello. Among the supplements I currently take, one contains vitamin B1 2.4 mg (200%), vitamin B2 2.8 mg (200%), vitamin B6 3.2 mg (213%), and vitamin B12 12 µg (500%), as well as vitamin A at 103% and vitamin E at 182%. I keep to the one-a-day serving, but is it okay to take a supplement every day when its B vitamins run from 200% to 500% of the nutrient reference values? I will be waiting for an expert's answer. Thank you ^^
- Life tips · Lifestyle Q. What is the outlook for the Busan blockchain regulation-free special zone, the next-generation internet technology OT-OCN, and applying anti-hacking security technology to smart-city construction, blockchain, and cryptocurrency exchanges? Is this possible with OT-OCN technology? And can it also be applied as anti-hacking security technology for smart-city construction, finance, and intellectual-property cloud storage?
# Referenced article: "부산이 일냈다! 세계 최초 블록체인 규제 자유특구로 지정" (Busan did it! Designated as the world's first blockchain regulation-free special zone) https://m.post.naver.com/viewer/postView.nhn?volumeNo=23767634&memberNo=25324157&searchKeyword=%EB%B6%80%EC%82%B0%20%EB%B8%94%EB%A1%9D%EC%B2%B4%EC%9D%B8%20%ED%8A%B9%EA%B5%AC&searchRank=5
# Referenced article: "해킹 사례로 보는 가상화폐와 블록체인" (Cryptocurrency and blockchain through the lens of hacking incidents) https://m.post.naver.com/viewer/postView.nhn?volumeNo=17177426&memberNo=3185448&vType=VERTICAL
# Interview video with Professor 박재경 on OT-OCN: https://youtu.be/8Dr6_9EIwtk
- Life tips · Lifestyle Q. Error when running Python code in IDLE? My code is the following:

"""
TensorFlow translation of the torch example found here (written by SeanNaren).
https://github.com/SeanNaren/TorchQLearningExample
Original keras example found here (written by Eder Santana).
https://gist.github.com/EderSantana/c7222daa328f0e885093#file-qlearn-py-L164
The agent plays a game of catch. Fruits drop from the sky and the agent can choose the actions
left/stay/right to catch the fruit before it reaches the ground.
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import random
import math
import os

# Parameters
epsilon = 1                     # The probability of choosing a random action (in training). This decays as iterations increase. (0 to 1)
epsilonMinimumValue = 0.001     # The minimum value we want epsilon to reach in training. (0 to 1)
nbActions = 3                   # The number of actions. Since we only have left/stay/right that means 3 actions.
epoch = 1001                    # The number of games we want the system to run for.
hiddenSize = 100                # Number of neurons in the hidden layers.
maxMemory = 500                 # How large should the memory be (where it stores its past experiences).
batchSize = 50                  # The mini-batch size for training. Samples are randomly taken from memory till mini-batch size.
gridSize = 10                   # The size of the grid that the agent is going to play the game on.
nbStates = gridSize * gridSize  # We eventually flatten to a 1d tensor to feed the network.
discount = 0.9                  # The discount is used to force the network to choose states that lead to the reward quicker (0 to 1)
learningRate = 0.2              # Learning Rate for Stochastic Gradient Descent (our optimizer).

# Create the base model.
X = tf.placeholder(tf.float32, [None, nbStates])
W1 = tf.Variable(tf.truncated_normal([nbStates, hiddenSize], stddev=1.0 / math.sqrt(float(nbStates))))
b1 = tf.Variable(tf.truncated_normal([hiddenSize], stddev=0.01))
input_layer = tf.nn.relu(tf.matmul(X, W1) + b1)
W2 = tf.Variable(tf.truncated_normal([hiddenSize, hiddenSize], stddev=1.0 / math.sqrt(float(hiddenSize))))
b2 = tf.Variable(tf.truncated_normal([hiddenSize], stddev=0.01))
hidden_layer = tf.nn.relu(tf.matmul(input_layer, W2) + b2)
W3 = tf.Variable(tf.truncated_normal([hiddenSize, nbActions], stddev=1.0 / math.sqrt(float(hiddenSize))))
b3 = tf.Variable(tf.truncated_normal([nbActions], stddev=0.01))
output_layer = tf.matmul(hidden_layer, W3) + b3

# True labels
Y = tf.placeholder(tf.float32, [None, nbActions])

# Mean squared error cost function
cost = tf.reduce_sum(tf.square(Y - output_layer)) / (2 * batchSize)

# Stochastic Gradient Descent Optimizer
optimizer = tf.train.GradientDescentOptimizer(learningRate).minimize(cost)


# Helper function: Chooses a random value between the two boundaries.
def randf(s, e):
    return (float(random.randrange(0, (e - s) * 9999)) / 10000) + s


# The environment: Handles interactions and contains the state of the environment
class CatchEnvironment():
    def __init__(self, gridSize):
        self.gridSize = gridSize
        self.nbStates = self.gridSize * self.gridSize
        self.state = np.empty(3, dtype=np.uint8)

    # Returns the state of the environment.
    def observe(self):
        canvas = self.drawState()
        canvas = np.reshape(canvas, (-1, self.nbStates))
        return canvas

    def drawState(self):
        canvas = np.zeros((self.gridSize, self.gridSize))
        canvas[self.state[0] - 1, self.state[1] - 1] = 1  # Draw the fruit.
        # Draw the basket. The basket takes the adjacent two places to the position of basket.
        canvas[self.gridSize - 1, self.state[2] - 1 - 1] = 1
        canvas[self.gridSize - 1, self.state[2] - 1] = 1
        canvas[self.gridSize - 1, self.state[2] - 1 + 1] = 1
        return canvas

    # Resets the environment. Randomly initialise the fruit position (always at the top to begin with) and bucket.
    def reset(self):
        initialFruitColumn = random.randrange(1, self.gridSize + 1)
        initialBucketPosition = random.randrange(2, self.gridSize + 1 - 1)
        self.state = np.array([1, initialFruitColumn, initialBucketPosition])
        return self.getState()

    def getState(self):
        stateInfo = self.state
        fruit_row = stateInfo[0]
        fruit_col = stateInfo[1]
        basket = stateInfo[2]
        return fruit_row, fruit_col, basket

    # Returns the award that the agent has gained for being in the current environment state.
    def getReward(self):
        fruitRow, fruitColumn, basket = self.getState()
        if (fruitRow == self.gridSize - 1):  # If the fruit has reached the bottom.
            if (abs(fruitColumn - basket) <= 1):  # Check if the basket caught the fruit.
                return 1
            else:
                return -1
        else:
            return 0

    def isGameOver(self):
        if (self.state[0] == self.gridSize - 1):
            return True
        else:
            return False

    def updateState(self, action):
        if (action == 1):
            action = -1
        elif (action == 2):
            action = 0
        else:
            action = 1
        fruitRow, fruitColumn, basket = self.getState()
        newBasket = min(max(2, basket + action), self.gridSize - 1)  # The min/max prevents the basket from moving out of the grid.
        fruitRow = fruitRow + 1  # The fruit is falling by 1 every action.
        self.state = np.array([fruitRow, fruitColumn, newBasket])

    # Action can be 1 (move left) or 2 (move right)
    def act(self, action):
        self.updateState(action)
        reward = self.getReward()
        gameOver = self.isGameOver()
        return self.observe(), reward, gameOver, self.getState()  # For purpose of the visual, I also return the state.


# The memory: Handles the internal memory that we add experiences that occur based on agent's actions,
# and creates batches of experiences based on the mini-batch size for training.
class ReplayMemory:
    def __init__(self, gridSize, maxMemory, discount):
        self.maxMemory = maxMemory
        self.gridSize = gridSize
        self.nbStates = self.gridSize * self.gridSize
        self.discount = discount
        canvas = np.zeros((self.gridSize, self.gridSize))
        canvas = np.reshape(canvas, (-1, self.nbStates))
        self.inputState = np.empty((self.maxMemory, 100), dtype=np.float32)
        self.actions = np.zeros(self.maxMemory, dtype=np.uint8)
        self.nextState = np.empty((self.maxMemory, 100), dtype=np.float32)
        self.gameOver = np.empty(self.maxMemory, dtype=np.bool)
        self.rewards = np.empty(self.maxMemory, dtype=np.int8)
        self.count = 0
        self.current = 0

    # Appends the experience to the memory.
    def remember(self, currentState, action, reward, nextState, gameOver):
        self.actions[self.current] = action
        self.rewards[self.current] = reward
        self.inputState[self.current, ...] = currentState
        self.nextState[self.current, ...] = nextState
        self.gameOver[self.current] = gameOver
        self.count = max(self.count, self.current + 1)
        self.current = (self.current + 1) % self.maxMemory

    def getBatch(self, model, batchSize, nbActions, nbStates, sess, X):
        # We check to see if we have enough memory inputs to make an entire batch, if not we create the biggest
        # batch we can (at the beginning of training we will not have enough experience to fill a batch).
        memoryLength = self.count
        chosenBatchSize = min(batchSize, memoryLength)
        inputs = np.zeros((chosenBatchSize, nbStates))
        targets = np.zeros((chosenBatchSize, nbActions))
        # Fill the inputs and targets up.
        for i in xrange(chosenBatchSize):
            if memoryLength == 1:
                memoryLength = 2
            # Choose a random memory experience to add to the batch.
            randomIndex = random.randrange(1, memoryLength)
            current_inputState = np.reshape(self.inputState[randomIndex], (1, 100))
            target = sess.run(model, feed_dict={X: current_inputState})
            current_nextState = np.reshape(self.nextState[randomIndex], (1, 100))
            current_outputs = sess.run(model, feed_dict={X: current_nextState})
            # Gives us Q_sa, the max q for the next state.
            nextStateMaxQ = np.amax(current_outputs)
            if (self.gameOver[randomIndex] == True):
                target[0, [self.actions[randomIndex] - 1]] = self.rewards[randomIndex]
            else:
                # reward + discount(gamma) * max_a' Q(s',a')
                # We are setting the Q-value for the action to r + gamma*max a' Q(s', a'). The rest stay the same
                # to give an error of 0 for those outputs.
                target[0, [self.actions[randomIndex] - 1]] = self.rewards[randomIndex] + self.discount * nextStateMaxQ
            # Update the inputs and targets.
            inputs[i] = current_inputState
            targets[i] = target
        return inputs, targets


def main(_):
    print("Training new model")
    # Define Environment
    env = CatchEnvironment(gridSize)
    # Define Replay Memory
    memory = ReplayMemory(gridSize, maxMemory, discount)
    # Add ops to save and restore all the variables.
    saver = tf.train.Saver()
    winCount = 0
    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        for i in xrange(epoch):
            # Initialize the environment.
            err = 0
            env.reset()
            isGameOver = False
            # The initial state of the environment.
            currentState = env.observe()
            while (isGameOver != True):
                action = -9999  # action initialization
                # Decides if we should choose a random action, or an action from the policy network.
                global epsilon
                if (randf(0, 1) <= epsilon):
                    action = random.randrange(1, nbActions + 1)
                else:
                    # Forward the current state through the network.
                    q = sess.run(output_layer, feed_dict={X: currentState})
                    # Find the max index (the chosen action).
                    index = q.argmax()
                    action = index + 1
                # Decay the epsilon by multiplying by 0.999, not allowing it to go below a certain threshold.
                if (epsilon > epsilonMinimumValue):
                    epsilon = epsilon * 0.999
                nextState, reward, gameOver, stateInfo = env.act(action)
                if (reward == 1):
                    winCount = winCount + 1
                memory.remember(currentState, action, reward, nextState, gameOver)
                # Update the current state and if the game is over.
                currentState = nextState
                isGameOver = gameOver
                # We get a batch of training data to train the model.
                inputs, targets = memory.getBatch(output_layer, batchSize, nbActions, nbStates, sess, X)
                # Train the network which returns the error.
                _, loss = sess.run([optimizer, cost], feed_dict={X: inputs, Y: targets})
                err = err + loss
            print("Epoch " + str(i) + ": err = " + str(err) + ": Win count = " + str(winCount) + " Win ratio = " + str(float(winCount) / float(i + 1) * 100))
        # Save the variables to disk.
        save_path = saver.save(sess, os.getcwd() + "/model.ckpt")
        print("Model saved in file: %s" % save_path)


if __name__ == '__main__':
    tf.app.run()

That is my code, but the following error occurred:

WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Training new model
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\util\tf_should_use.py:198: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
W0820 22:17:13.656675 9068 deprecation.py:323] From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\util\tf_should_use.py:198: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
Traceback (most recent call last):
  File "C:\Windows\system32\python", line 267, in <module>
    tf.app.run()
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:\Windows\system32\python", line 216, in main
    for i in xrange(epoch):
NameError: name 'xrange' is not defined

How can I fix this? I know it is very long, but I would be grateful for a solution ㅠㅠ
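The NameError at the bottom of the traceback is the actual problem: xrange is a Python 2 built-in that was removed in Python 3, where range covers the same use. A minimal sketch of one way to resolve it, assuming the script is run under Python 3 (either replace every xrange(...) call with range(...), or alias the name once near the imports):

    # One possible fix for Python 3: alias xrange to range (or just use range directly).
    try:
        xrange            # still defined on Python 2, nothing to do
    except NameError:
        xrange = range    # Python 3: fall back to the built-in range

    for i in xrange(5):   # now runs on both versions
        print(i)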
- Life tips · Lifestyle Q. A question about objects and arrays in JavaScript.

function select(arr, obj) {
  // Takes an array and an object and returns an object whose keys are the elements of the array.
  // Example inputs: arr = ['a', 'c', 'e'], obj = { a: 1, b: 2, c: 3, d: 4 }
  let result = {};
  for (let key in obj) {
    for (let n = 0; n <= arr.length; n++) {
      if (arr[n] === key) {
        result[key] = obj[key];
        // I wrote result = obj[key] and was told it was wrong; after puzzling over it for a while
        // I checked the solution and it says result[key] = obj[key] is correct.
        // What is the difference? ㅠ Isn't result already an empty object at that point?
      }
    }
  }
  return result;
}
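What the asker is running into is the difference between rebinding a variable and assigning into a container: result = obj[key] makes result point at a single value and throws away the object being built, while result[key] = obj[key] adds one key/value pair to the accumulating object. A minimal sketch of the same distinction, written in Python (a hypothetical select mirroring the exercise, not the course's reference solution):

    # Hypothetical Python analogue of the exercise's select(arr, obj).
    def select(keys, obj):
        result = {}
        for key in obj:
            if key in keys:
                result[key] = obj[key]   # keyed assignment: grows the result dict
                # result = obj[key]      # rebinding: result would end up as a bare value
        return result

    print(select(['a', 'c', 'e'], {'a': 1, 'b': 2, 'c': 3, 'd': 4}))  # {'a': 1, 'c': 3}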
- Life tips · Lifestyle Q. Questions about Casper FFG!! … that is how I understand it. Say r -> b2 -> b3 -> b4 is the main chain.
Question 1. Since b3 has been justified, b2 should be finalized. In that case, if b2 is the 100th block, then b3 should be the 150th block (since it is a direct-descendant checkpoint). So how can a2 and a3 be linked in between b2 and b3? (The multiple-of-50 blocks are already b2 and b3, so is it fine for a2 and a3 not to be multiple-of-50 blocks?) Or are a2 and a3 not actually justified, just linked in? I do not understand how that overall picture could come about.
Question 2. In FFG, can a link skip ahead, say from block #50 straight to block #200, instead of going to the very next checkpoint block (and still be justified and then finalized)?
Question 3. I am curious why such a situation would arise at all. Is there any reason to vote for a block at a lower height than the existing main chain? The fork-choice rule will follow the longest justified chain anyway... I would appreciate a description of any attack or phenomenon this approach could lead to.
Question 4. I am curious what kind of conflict occurs if such a situation is not prevented.
- Internal medicine · Medical consultation Q. Is there anything wrong with the order or combinations of the supplements I take? … it seems there might be. Right now I take them all at once in the morning:
1. Vitamin B
2. Vitamin D
3. Probiotics
4. Omega-3 or krill oil
5. Maca
6. Calcium & magnesium
7. 라스베라셀
8. Vitamin C & collagen
In addition, for about a month now I have been taking a herbal diet medicine: one packet after breakfast and lunch, one packet before dinner, and one pill before bed. Also, for about a year I have been taking finasteride as a preventive measure, and there have been no particular problems. For the first two to three weeks I really ate only chicken breast, eggs, and salad, and I lost about 5-6 kg over the month. I also jog two or three times a week for about 30 minutes. I am asking because I am not sure whether it is simply psychological or whether something in all of this could be the cause. Thank you.
- Medication · Medicine & supplements Q. Water-soluble and fat-soluble vitamins
1) The B vitamins and vitamin C are commonly said to be water-soluble. I learned that vitamin B2 (related to FAD) and vitamin B3 (related to NAD), both used in cellular respiration, help the electron transport chain and give you energy when you are run-down or tired. What I am wondering is: since they are water-soluble, is it fine to take a lot of them? If they are going to be supplied as an energy source anyway, isn't it better to take more?
2) I learned that the fat-soluble vitamin D promotes phosphate absorption through the liver and kidneys. But because it is fat-soluble I am reluctant to take much of it. Should I stick to the recommended amount, or is it fine to increase the dose a little? I am worried that getting the amount wrong could cause something like kidney stones.
- Capital gains tax · Tax & accounting Q. Questions about capital gains tax on a long-term registered rental home and about selling it
I currently own two apartments, A and B.
A: I lived in it for 8 years and then rented it out on a jeonse lease [bought in 2006] (purchase price 300 million won; current official price 400 million won; actual market price 800 million won). One year after renting out A, I registered it as a long-term rental home, and about four years have passed since then.
B: I bought it and have been living in it, now in the fifth year [bought in 2013] (purchase price in the mid-500-million range; current official price 500 million won; actual market price 920 million won).
In this case:
1. If I sell the long-term rental home first, does capital gains tax arise?
2. If I sell the home I currently live in, does capital gains tax arise (excluding the capital gains tax on the portion above 900 million won)?
3. When the long-term rental home is sold, does the buyer have to keep the rental-business registration?
4. I would also like to know how to minimize the capital gains tax, and what advantages there would be in gifting instead.